00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2021 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3286 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.078 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.078 The recommended git tool is: git 00:00:00.078 using credential 00000000-0000-0000-0000-000000000002 00:00:00.080 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.109 Fetching changes from the remote Git repository 00:00:00.111 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.151 Using shallow fetch with depth 1 00:00:00.151 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.151 > git --version # timeout=10 00:00:00.186 > git --version # 'git version 2.39.2' 00:00:00.186 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.216 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.216 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.213 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.224 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.236 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD) 00:00:04.236 > git config core.sparsecheckout # timeout=10 00:00:04.247 > git read-tree -mu HEAD # timeout=10 00:00:04.264 > git checkout -f 
1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5 00:00:04.283 Commit message: "spdk-abi-per-patch: pass revision to subbuild" 00:00:04.283 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:04.394 [Pipeline] Start of Pipeline 00:00:04.409 [Pipeline] library 00:00:04.411 Loading library shm_lib@master 00:00:04.411 Library shm_lib@master is cached. Copying from home. 00:00:04.428 [Pipeline] node 00:00:04.444 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.445 [Pipeline] { 00:00:04.456 [Pipeline] catchError 00:00:04.457 [Pipeline] { 00:00:04.467 [Pipeline] wrap 00:00:04.476 [Pipeline] { 00:00:04.484 [Pipeline] stage 00:00:04.486 [Pipeline] { (Prologue) 00:00:04.665 [Pipeline] sh 00:00:04.942 + logger -p user.info -t JENKINS-CI 00:00:04.964 [Pipeline] echo 00:00:04.966 Node: GP11 00:00:04.974 [Pipeline] sh 00:00:05.268 [Pipeline] setCustomBuildProperty 00:00:05.280 [Pipeline] echo 00:00:05.282 Cleanup processes 00:00:05.285 [Pipeline] sh 00:00:05.559 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.559 3878954 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.572 [Pipeline] sh 00:00:05.854 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.854 ++ grep -v 'sudo pgrep' 00:00:05.854 ++ awk '{print $1}' 00:00:05.854 + sudo kill -9 00:00:05.854 + true 00:00:05.867 [Pipeline] cleanWs 00:00:05.876 [WS-CLEANUP] Deleting project workspace... 00:00:05.876 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.881 [WS-CLEANUP] done 00:00:05.885 [Pipeline] setCustomBuildProperty 00:00:05.897 [Pipeline] sh 00:00:06.175 + sudo git config --global --replace-all safe.directory '*' 00:00:06.266 [Pipeline] httpRequest 00:00:06.294 [Pipeline] echo 00:00:06.296 Sorcerer 10.211.164.101 is alive 00:00:06.304 [Pipeline] httpRequest 00:00:06.309 HttpMethod: GET 00:00:06.309 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.309 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.329 Response Code: HTTP/1.1 200 OK 00:00:06.329 Success: Status code 200 is in the accepted range: 200,404 00:00:06.330 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:10.226 [Pipeline] sh 00:00:10.508 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:10.525 [Pipeline] httpRequest 00:00:10.551 [Pipeline] echo 00:00:10.553 Sorcerer 10.211.164.101 is alive 00:00:10.560 [Pipeline] httpRequest 00:00:10.564 HttpMethod: GET 00:00:10.565 URL: http://10.211.164.101/packages/spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:00:10.566 Sending request to url: http://10.211.164.101/packages/spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:00:10.579 Response Code: HTTP/1.1 200 OK 00:00:10.580 Success: Status code 200 is in the accepted range: 200,404 00:00:10.580 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:00:56.436 [Pipeline] sh 00:00:56.716 + tar --no-same-owner -xf spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:01:00.008 [Pipeline] sh 00:01:00.284 + git -C spdk log --oneline -n5 00:01:00.284 89fd17309 bdev/raid: add qos for raid process 00:01:00.284 9645ea138 util: move module/sock/sock_kernel.h contents to net.c 00:01:00.284 e8671c893 util: add spdk_net_get_interface_name 00:01:00.284 
7798a2572 scripts/nvmf_perf: set all NIC RX queues at once 00:01:00.284 986fe0958 scripts/nvmf_perf: indent multi-line strings 00:01:00.300 [Pipeline] withCredentials 00:01:00.310 > git --version # timeout=10 00:01:00.320 > git --version # 'git version 2.39.2' 00:01:00.337 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:00.339 [Pipeline] { 00:01:00.347 [Pipeline] retry 00:01:00.349 [Pipeline] { 00:01:00.366 [Pipeline] sh 00:01:00.654 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:01.255 [Pipeline] } 00:01:01.275 [Pipeline] // retry 00:01:01.279 [Pipeline] } 00:01:01.299 [Pipeline] // withCredentials 00:01:01.309 [Pipeline] httpRequest 00:01:01.331 [Pipeline] echo 00:01:01.332 Sorcerer 10.211.164.101 is alive 00:01:01.338 [Pipeline] httpRequest 00:01:01.342 HttpMethod: GET 00:01:01.343 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:01.343 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:01.371 Response Code: HTTP/1.1 200 OK 00:01:01.372 Success: Status code 200 is in the accepted range: 200,404 00:01:01.372 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:13.788 [Pipeline] sh 00:04:14.066 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:15.969 [Pipeline] sh 00:04:16.242 + git -C dpdk log --oneline -n5 00:04:16.242 caf0f5d395 version: 22.11.4 00:04:16.242 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:04:16.242 dc9c799c7d vhost: fix missing spinlock unlock 00:04:16.242 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:04:16.242 6ef77f2a5e net/gve: fix RX buffer size alignment 00:04:16.252 [Pipeline] } 00:04:16.268 [Pipeline] // stage 00:04:16.276 [Pipeline] stage 00:04:16.278 [Pipeline] { (Prepare) 00:04:16.300 [Pipeline] writeFile 00:04:16.316 [Pipeline] sh 00:04:16.592 + 
logger -p user.info -t JENKINS-CI 00:04:16.604 [Pipeline] sh 00:04:16.883 + logger -p user.info -t JENKINS-CI 00:04:16.894 [Pipeline] sh 00:04:17.173 + cat autorun-spdk.conf 00:04:17.173 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:17.173 SPDK_TEST_NVMF=1 00:04:17.173 SPDK_TEST_NVME_CLI=1 00:04:17.173 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:17.173 SPDK_TEST_NVMF_NICS=e810 00:04:17.173 SPDK_TEST_VFIOUSER=1 00:04:17.173 SPDK_RUN_UBSAN=1 00:04:17.173 NET_TYPE=phy 00:04:17.173 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:17.173 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:04:17.179 RUN_NIGHTLY=1 00:04:17.185 [Pipeline] readFile 00:04:17.203 [Pipeline] withEnv 00:04:17.204 [Pipeline] { 00:04:17.215 [Pipeline] sh 00:04:17.500 + set -ex 00:04:17.500 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:04:17.500 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:17.500 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:17.500 ++ SPDK_TEST_NVMF=1 00:04:17.500 ++ SPDK_TEST_NVME_CLI=1 00:04:17.500 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:17.500 ++ SPDK_TEST_NVMF_NICS=e810 00:04:17.500 ++ SPDK_TEST_VFIOUSER=1 00:04:17.500 ++ SPDK_RUN_UBSAN=1 00:04:17.500 ++ NET_TYPE=phy 00:04:17.500 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:17.500 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:04:17.500 ++ RUN_NIGHTLY=1 00:04:17.500 + case $SPDK_TEST_NVMF_NICS in 00:04:17.500 + DRIVERS=ice 00:04:17.500 + [[ tcp == \r\d\m\a ]] 00:04:17.500 + [[ -n ice ]] 00:04:17.500 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:04:17.500 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:04:17.500 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:04:17.500 rmmod: ERROR: Module irdma is not currently loaded 00:04:17.500 rmmod: ERROR: Module i40iw is not currently loaded 00:04:17.500 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:04:17.500 + true 00:04:17.500 + for D in $DRIVERS 00:04:17.500 + sudo 
modprobe ice 00:04:17.500 + exit 0 00:04:17.509 [Pipeline] } 00:04:17.526 [Pipeline] // withEnv 00:04:17.531 [Pipeline] } 00:04:17.547 [Pipeline] // stage 00:04:17.557 [Pipeline] catchError 00:04:17.558 [Pipeline] { 00:04:17.594 [Pipeline] timeout 00:04:17.595 Timeout set to expire in 50 min 00:04:17.596 [Pipeline] { 00:04:17.609 [Pipeline] stage 00:04:17.611 [Pipeline] { (Tests) 00:04:17.626 [Pipeline] sh 00:04:17.905 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:17.905 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:17.905 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:17.905 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:04:17.905 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.905 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:17.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:04:17.905 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:17.905 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:17.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:17.905 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:04:17.905 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:17.905 + source /etc/os-release 00:04:17.905 ++ NAME='Fedora Linux' 00:04:17.905 ++ VERSION='38 (Cloud Edition)' 00:04:17.905 ++ ID=fedora 00:04:17.905 ++ VERSION_ID=38 00:04:17.905 ++ VERSION_CODENAME= 00:04:17.905 ++ PLATFORM_ID=platform:f38 00:04:17.905 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:04:17.905 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:17.905 ++ LOGO=fedora-logo-icon 00:04:17.905 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:04:17.905 ++ HOME_URL=https://fedoraproject.org/ 00:04:17.905 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:04:17.905 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:17.905 ++ 
BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:17.905 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:17.905 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:04:17.905 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:17.905 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:04:17.905 ++ SUPPORT_END=2024-05-14 00:04:17.905 ++ VARIANT='Cloud Edition' 00:04:17.905 ++ VARIANT_ID=cloud 00:04:17.905 + uname -a 00:04:17.905 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:04:17.905 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:18.835 Hugepages 00:04:18.835 node hugesize free / total 00:04:19.093 node0 1048576kB 0 / 0 00:04:19.093 node0 2048kB 0 / 0 00:04:19.093 node1 1048576kB 0 / 0 00:04:19.093 node1 2048kB 0 / 0 00:04:19.093 00:04:19.093 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:19.093 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:19.093 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:19.093 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:19.093 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:19.093 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:19.093 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:19.093 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:19.093 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:19.093 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:19.093 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:19.093 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:19.093 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:19.093 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:19.093 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:19.093 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:19.093 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:19.093 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:19.093 + rm -f /tmp/spdk-ld-path 00:04:19.093 + source autorun-spdk.conf 00:04:19.093 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:19.093 ++ SPDK_TEST_NVMF=1 00:04:19.093 ++ 
SPDK_TEST_NVME_CLI=1 00:04:19.093 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:19.093 ++ SPDK_TEST_NVMF_NICS=e810 00:04:19.093 ++ SPDK_TEST_VFIOUSER=1 00:04:19.093 ++ SPDK_RUN_UBSAN=1 00:04:19.093 ++ NET_TYPE=phy 00:04:19.093 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:19.093 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:04:19.093 ++ RUN_NIGHTLY=1 00:04:19.093 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:19.093 + [[ -n '' ]] 00:04:19.093 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.093 + for M in /var/spdk/build-*-manifest.txt 00:04:19.093 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:19.093 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:19.093 + for M in /var/spdk/build-*-manifest.txt 00:04:19.093 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:19.093 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:19.093 ++ uname 00:04:19.093 + [[ Linux == \L\i\n\u\x ]] 00:04:19.093 + sudo dmesg -T 00:04:19.093 + sudo dmesg --clear 00:04:19.093 + dmesg_pid=3880918 00:04:19.093 + [[ Fedora Linux == FreeBSD ]] 00:04:19.093 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:19.093 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:19.093 + sudo dmesg -Tw 00:04:19.093 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:19.093 + [[ -x /usr/src/fio-static/fio ]] 00:04:19.093 + export FIO_BIN=/usr/src/fio-static/fio 00:04:19.093 + FIO_BIN=/usr/src/fio-static/fio 00:04:19.093 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:19.093 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:04:19.093 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:19.093 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:19.093 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:19.093 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:19.093 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:19.093 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:19.093 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:19.093 Test configuration: 00:04:19.093 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:19.093 SPDK_TEST_NVMF=1 00:04:19.093 SPDK_TEST_NVME_CLI=1 00:04:19.093 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:19.093 SPDK_TEST_NVMF_NICS=e810 00:04:19.093 SPDK_TEST_VFIOUSER=1 00:04:19.093 SPDK_RUN_UBSAN=1 00:04:19.093 NET_TYPE=phy 00:04:19.093 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:19.093 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:04:19.093 RUN_NIGHTLY=1 08:00:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:19.093 08:00:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:19.093 08:00:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.093 08:00:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.093 08:00:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.093 08:00:28 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.093 08:00:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.093 08:00:28 -- paths/export.sh@5 -- $ export PATH 00:04:19.093 08:00:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.093 08:00:28 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:19.361 08:00:28 -- common/autobuild_common.sh@447 -- $ date +%s 00:04:19.361 08:00:28 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721541628.XXXXXX 00:04:19.361 08:00:28 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721541628.Dv7jxR 00:04:19.361 08:00:28 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:04:19.361 08:00:28 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:04:19.361 08:00:28 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:04:19.361 08:00:28 -- 
common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:04:19.361 08:00:28 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:04:19.361 08:00:28 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:04:19.361 08:00:28 -- common/autobuild_common.sh@463 -- $ get_config_params 00:04:19.361 08:00:28 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:04:19.361 08:00:28 -- common/autotest_common.sh@10 -- $ set +x 00:04:19.361 08:00:28 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:04:19.361 08:00:28 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:04:19.361 08:00:28 -- pm/common@17 -- $ local monitor 00:04:19.361 08:00:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.361 08:00:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.361 08:00:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.361 08:00:28 -- pm/common@21 -- $ date +%s 00:04:19.361 08:00:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.361 08:00:28 -- pm/common@21 -- $ date +%s 00:04:19.361 08:00:28 -- pm/common@25 -- $ sleep 1 00:04:19.361 08:00:28 -- pm/common@21 -- $ date +%s 00:04:19.361 08:00:28 -- pm/common@21 -- $ date +%s 00:04:19.361 08:00:28 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721541628 00:04:19.361 08:00:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721541628 00:04:19.361 08:00:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721541628 00:04:19.361 08:00:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721541628 00:04:19.361 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721541628_collect-vmstat.pm.log 00:04:19.361 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721541628_collect-cpu-load.pm.log 00:04:19.361 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721541628_collect-cpu-temp.pm.log 00:04:19.361 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721541628_collect-bmc-pm.bmc.pm.log 00:04:20.297 08:00:29 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:04:20.297 08:00:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:20.297 08:00:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:20.297 08:00:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:20.297 08:00:29 -- spdk/autobuild.sh@16 -- $ date -u 00:04:20.297 Sun Jul 21 06:00:29 AM UTC 2024 00:04:20.297 08:00:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:04:20.297 v24.09-pre-254-g89fd17309 00:04:20.297 08:00:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:20.297 08:00:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:20.297 08:00:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:20.297 08:00:29 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:04:20.297 08:00:29 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:20.297 08:00:29 -- common/autotest_common.sh@10 -- $ set +x 00:04:20.297 ************************************ 00:04:20.297 START TEST ubsan 00:04:20.297 ************************************ 00:04:20.297 08:00:29 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:04:20.297 using ubsan 00:04:20.297 00:04:20.297 real 0m0.000s 00:04:20.297 user 0m0.000s 00:04:20.297 sys 0m0.000s 00:04:20.297 08:00:29 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:20.297 08:00:29 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:20.297 ************************************ 00:04:20.297 END TEST ubsan 00:04:20.297 ************************************ 00:04:20.297 08:00:29 -- common/autotest_common.sh@1142 -- $ return 0 00:04:20.297 08:00:29 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:04:20.297 08:00:29 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:04:20.297 08:00:29 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:04:20.297 08:00:29 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:04:20.297 08:00:29 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:20.297 08:00:29 -- common/autotest_common.sh@10 -- $ set +x 00:04:20.297 ************************************ 00:04:20.297 START TEST build_native_dpdk 00:04:20.297 ************************************ 00:04:20.297 08:00:29 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:04:20.297 08:00:29 
build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:04:20.297 caf0f5d395 version: 22.11.4 00:04:20.297 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:04:20.297 dc9c799c7d vhost: fix missing spinlock unlock 00:04:20.297 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:04:20.297 6ef77f2a5e net/gve: fix RX buffer size alignment 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 
]] 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:04:20.297 patching file config/rte_config.h 00:04:20.297 Hunk #1 succeeded at 60 (offset 1 line). 
00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:04:20.297 08:00:29 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:04:20.297 patching file lib/pcapng/rte_pcapng.c 00:04:20.297 Hunk #1 succeeded at 110 (offset -18 lines). 
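The trace above steps through `scripts/common.sh`'s `cmp_versions` logic: both version strings are split into arrays and compared component-wise as integers (22.11.4 < 24.07.0 because 22 < 24 at the first component, so the `patch -p1` for rte_pcapng.c runs). A minimal re-implementation of that less-than check, as a sketch rather than the exact SPDK script:

```shell
# Hypothetical condensed version of the cmp_versions "<" path traced above:
# split on '.', compare components numerically, missing components count as 0.
lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i len=${#v1[@]}
    (( ${#v2[@]} > len )) && len=${#v2[@]}
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # first version is newer
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first version is older
    done
    return 1                                          # equal: not less-than
}

lt 22.11.4 24.07.0 && echo "older"   # prints "older"
lt 22.11.4 21.11.0 || echo "newer"   # prints "newer"
```

This is why the log shows both comparisons: `lt 22.11.4 24.07.0` gates the 24.07-era patch, while the earlier 22-vs-21 comparison returns 1 (not less-than) and selects the rte_config.h patch branch instead.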
00:04:20.297 08:00:29 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:04:20.298 08:00:29 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:04:20.298 08:00:29 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:04:20.298 08:00:29 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:04:20.298 08:00:29 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:04:24.488 The Meson build system 00:04:24.488 Version: 1.3.1 00:04:24.488 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:04:24.488 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:04:24.488 Build type: native build 00:04:24.488 Program cat found: YES (/usr/bin/cat) 00:04:24.488 Project name: DPDK 00:04:24.488 Project version: 22.11.4 00:04:24.488 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:24.488 C linker for the host machine: gcc ld.bfd 2.39-16 00:04:24.488 Host machine cpu family: x86_64 00:04:24.488 Host machine cpu: x86_64 00:04:24.488 Message: ## Building in Developer Mode ## 00:04:24.488 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:24.488 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:04:24.488 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:04:24.488 Program objdump found: YES (/usr/bin/objdump) 00:04:24.488 Program python3 found: YES (/usr/bin/python3) 00:04:24.488 
Program cat found: YES (/usr/bin/cat) 00:04:24.488 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:04:24.488 Checking for size of "void *" : 8 00:04:24.488 Checking for size of "void *" : 8 (cached) 00:04:24.488 Library m found: YES 00:04:24.488 Library numa found: YES 00:04:24.488 Has header "numaif.h" : YES 00:04:24.488 Library fdt found: NO 00:04:24.488 Library execinfo found: NO 00:04:24.488 Has header "execinfo.h" : YES 00:04:24.488 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:24.488 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:24.488 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:24.488 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:24.488 Run-time dependency openssl found: YES 3.0.9 00:04:24.488 Run-time dependency libpcap found: YES 1.10.4 00:04:24.488 Has header "pcap.h" with dependency libpcap: YES 00:04:24.488 Compiler for C supports arguments -Wcast-qual: YES 00:04:24.488 Compiler for C supports arguments -Wdeprecated: YES 00:04:24.488 Compiler for C supports arguments -Wformat: YES 00:04:24.488 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:24.488 Compiler for C supports arguments -Wformat-security: NO 00:04:24.488 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:24.488 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:24.488 Compiler for C supports arguments -Wnested-externs: YES 00:04:24.488 Compiler for C supports arguments -Wold-style-definition: YES 00:04:24.488 Compiler for C supports arguments -Wpointer-arith: YES 00:04:24.488 Compiler for C supports arguments -Wsign-compare: YES 00:04:24.488 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:24.488 Compiler for C supports arguments -Wundef: YES 00:04:24.488 Compiler for C supports arguments -Wwrite-strings: YES 00:04:24.488 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:24.488 
Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:24.488 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:24.488 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:24.488 Compiler for C supports arguments -mavx512f: YES 00:04:24.488 Checking if "AVX512 checking" compiles: YES 00:04:24.488 Fetching value of define "__SSE4_2__" : 1 00:04:24.488 Fetching value of define "__AES__" : 1 00:04:24.488 Fetching value of define "__AVX__" : 1 00:04:24.488 Fetching value of define "__AVX2__" : (undefined) 00:04:24.488 Fetching value of define "__AVX512BW__" : (undefined) 00:04:24.488 Fetching value of define "__AVX512CD__" : (undefined) 00:04:24.488 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:24.488 Fetching value of define "__AVX512F__" : (undefined) 00:04:24.488 Fetching value of define "__AVX512VL__" : (undefined) 00:04:24.488 Fetching value of define "__PCLMUL__" : 1 00:04:24.488 Fetching value of define "__RDRND__" : 1 00:04:24.488 Fetching value of define "__RDSEED__" : (undefined) 00:04:24.488 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:24.488 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:24.488 Message: lib/kvargs: Defining dependency "kvargs" 00:04:24.488 Message: lib/telemetry: Defining dependency "telemetry" 00:04:24.488 Checking for function "getentropy" : YES 00:04:24.488 Message: lib/eal: Defining dependency "eal" 00:04:24.488 Message: lib/ring: Defining dependency "ring" 00:04:24.488 Message: lib/rcu: Defining dependency "rcu" 00:04:24.488 Message: lib/mempool: Defining dependency "mempool" 00:04:24.488 Message: lib/mbuf: Defining dependency "mbuf" 00:04:24.488 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:24.488 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:24.488 Compiler for C supports arguments -mpclmul: YES 00:04:24.488 Compiler for C supports arguments -maes: YES 00:04:24.488 Compiler for C supports 
arguments -mavx512f: YES (cached) 00:04:24.488 Compiler for C supports arguments -mavx512bw: YES 00:04:24.488 Compiler for C supports arguments -mavx512dq: YES 00:04:24.488 Compiler for C supports arguments -mavx512vl: YES 00:04:24.488 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:24.488 Compiler for C supports arguments -mavx2: YES 00:04:24.488 Compiler for C supports arguments -mavx: YES 00:04:24.488 Message: lib/net: Defining dependency "net" 00:04:24.488 Message: lib/meter: Defining dependency "meter" 00:04:24.488 Message: lib/ethdev: Defining dependency "ethdev" 00:04:24.488 Message: lib/pci: Defining dependency "pci" 00:04:24.488 Message: lib/cmdline: Defining dependency "cmdline" 00:04:24.488 Message: lib/metrics: Defining dependency "metrics" 00:04:24.488 Message: lib/hash: Defining dependency "hash" 00:04:24.488 Message: lib/timer: Defining dependency "timer" 00:04:24.488 Fetching value of define "__AVX2__" : (undefined) (cached) 00:04:24.488 Compiler for C supports arguments -mavx2: YES (cached) 00:04:24.488 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:24.488 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:04:24.488 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:04:24.488 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:04:24.488 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:04:24.488 Message: lib/acl: Defining dependency "acl" 00:04:24.488 Message: lib/bbdev: Defining dependency "bbdev" 00:04:24.488 Message: lib/bitratestats: Defining dependency "bitratestats" 00:04:24.488 Run-time dependency libelf found: YES 0.190 00:04:24.488 Message: lib/bpf: Defining dependency "bpf" 00:04:24.488 Message: lib/cfgfile: Defining dependency "cfgfile" 00:04:24.488 Message: lib/compressdev: Defining dependency "compressdev" 00:04:24.488 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:24.488 Message: lib/distributor: Defining 
dependency "distributor" 00:04:24.488 Message: lib/efd: Defining dependency "efd" 00:04:24.488 Message: lib/eventdev: Defining dependency "eventdev" 00:04:24.488 Message: lib/gpudev: Defining dependency "gpudev" 00:04:24.488 Message: lib/gro: Defining dependency "gro" 00:04:24.488 Message: lib/gso: Defining dependency "gso" 00:04:24.488 Message: lib/ip_frag: Defining dependency "ip_frag" 00:04:24.488 Message: lib/jobstats: Defining dependency "jobstats" 00:04:24.488 Message: lib/latencystats: Defining dependency "latencystats" 00:04:24.488 Message: lib/lpm: Defining dependency "lpm" 00:04:24.488 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:24.488 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:04:24.488 Fetching value of define "__AVX512IFMA__" : (undefined) 00:04:24.488 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:04:24.488 Message: lib/member: Defining dependency "member" 00:04:24.488 Message: lib/pcapng: Defining dependency "pcapng" 00:04:24.488 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:24.488 Message: lib/power: Defining dependency "power" 00:04:24.488 Message: lib/rawdev: Defining dependency "rawdev" 00:04:24.488 Message: lib/regexdev: Defining dependency "regexdev" 00:04:24.488 Message: lib/dmadev: Defining dependency "dmadev" 00:04:24.488 Message: lib/rib: Defining dependency "rib" 00:04:24.488 Message: lib/reorder: Defining dependency "reorder" 00:04:24.488 Message: lib/sched: Defining dependency "sched" 00:04:24.488 Message: lib/security: Defining dependency "security" 00:04:24.488 Message: lib/stack: Defining dependency "stack" 00:04:24.488 Has header "linux/userfaultfd.h" : YES 00:04:24.488 Message: lib/vhost: Defining dependency "vhost" 00:04:24.488 Message: lib/ipsec: Defining dependency "ipsec" 00:04:24.488 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:24.488 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:04:24.488 
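Each "Compiler for C supports arguments ..." line in the configure output is the result of a throwaway test compile: meson feeds a trivial translation unit to the compiler with the candidate flag and records whether it exits cleanly. A rough sketch of that probe (assumed behavior for illustration, not meson's actual implementation, and `supports_cflag` is a made-up helper name):

```shell
# Probe whether the host gcc accepts a given flag by test-compiling a
# trivial program; unrecognized options make the compile fail.
supports_cflag() {
    echo 'int main(void) { return 0; }' \
        | gcc "$1" -Werror -x c -o /dev/null - 2>/dev/null
}

supports_cflag -Wcast-qual        && echo "YES"
supports_cflag -Wbogus-flag-xyz   || echo "NO"
```

The `(undefined)` results for defines like `__AVX512F__` come from the complementary probe: compiling with no extra flags and asking the preprocessor for the macro's value, which is why the same defines later show up as `(cached)`.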
Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:04:24.488 Compiler for C supports arguments -mavx512bw: YES (cached) 00:04:24.488 Message: lib/fib: Defining dependency "fib" 00:04:24.488 Message: lib/port: Defining dependency "port" 00:04:24.488 Message: lib/pdump: Defining dependency "pdump" 00:04:24.488 Message: lib/table: Defining dependency "table" 00:04:24.488 Message: lib/pipeline: Defining dependency "pipeline" 00:04:24.488 Message: lib/graph: Defining dependency "graph" 00:04:24.488 Message: lib/node: Defining dependency "node" 00:04:24.488 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:24.488 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:24.488 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:24.488 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:24.488 Compiler for C supports arguments -Wno-sign-compare: YES 00:04:24.488 Compiler for C supports arguments -Wno-unused-value: YES 00:04:25.416 Compiler for C supports arguments -Wno-format: YES 00:04:25.416 Compiler for C supports arguments -Wno-format-security: YES 00:04:25.416 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:04:25.416 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:25.416 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:04:25.416 Compiler for C supports arguments -Wno-unused-parameter: YES 00:04:25.416 Fetching value of define "__AVX2__" : (undefined) (cached) 00:04:25.416 Compiler for C supports arguments -mavx2: YES (cached) 00:04:25.416 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:25.416 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:25.416 Compiler for C supports arguments -mavx512bw: YES (cached) 00:04:25.416 Compiler for C supports arguments -march=skylake-avx512: YES 00:04:25.416 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:04:25.416 Program doxygen found: YES 
(/usr/bin/doxygen) 00:04:25.416 Configuring doxy-api.conf using configuration 00:04:25.416 Program sphinx-build found: NO 00:04:25.417 Configuring rte_build_config.h using configuration 00:04:25.417 Message: 00:04:25.417 ================= 00:04:25.417 Applications Enabled 00:04:25.417 ================= 00:04:25.417 00:04:25.417 apps: 00:04:25.417 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:04:25.417 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:04:25.417 test-security-perf, 00:04:25.417 00:04:25.417 Message: 00:04:25.417 ================= 00:04:25.417 Libraries Enabled 00:04:25.417 ================= 00:04:25.417 00:04:25.417 libs: 00:04:25.417 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:04:25.417 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:04:25.417 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:04:25.417 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:04:25.417 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:04:25.417 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:04:25.417 table, pipeline, graph, node, 00:04:25.417 00:04:25.417 Message: 00:04:25.417 =============== 00:04:25.417 Drivers Enabled 00:04:25.417 =============== 00:04:25.417 00:04:25.417 common: 00:04:25.417 00:04:25.417 bus: 00:04:25.417 pci, vdev, 00:04:25.417 mempool: 00:04:25.417 ring, 00:04:25.417 dma: 00:04:25.417 00:04:25.417 net: 00:04:25.417 i40e, 00:04:25.417 raw: 00:04:25.417 00:04:25.417 crypto: 00:04:25.417 00:04:25.417 compress: 00:04:25.417 00:04:25.417 regex: 00:04:25.417 00:04:25.417 vdpa: 00:04:25.417 00:04:25.417 event: 00:04:25.417 00:04:25.417 baseband: 00:04:25.417 00:04:25.417 gpu: 00:04:25.417 00:04:25.417 00:04:25.417 Message: 00:04:25.417 ================= 00:04:25.417 Content Skipped 00:04:25.417 ================= 00:04:25.417 00:04:25.417 apps: 
00:04:25.417 00:04:25.417 libs: 00:04:25.417 kni: explicitly disabled via build config (deprecated lib) 00:04:25.417 flow_classify: explicitly disabled via build config (deprecated lib) 00:04:25.417 00:04:25.417 drivers: 00:04:25.417 common/cpt: not in enabled drivers build config 00:04:25.417 common/dpaax: not in enabled drivers build config 00:04:25.417 common/iavf: not in enabled drivers build config 00:04:25.417 common/idpf: not in enabled drivers build config 00:04:25.417 common/mvep: not in enabled drivers build config 00:04:25.417 common/octeontx: not in enabled drivers build config 00:04:25.417 bus/auxiliary: not in enabled drivers build config 00:04:25.417 bus/dpaa: not in enabled drivers build config 00:04:25.417 bus/fslmc: not in enabled drivers build config 00:04:25.417 bus/ifpga: not in enabled drivers build config 00:04:25.417 bus/vmbus: not in enabled drivers build config 00:04:25.417 common/cnxk: not in enabled drivers build config 00:04:25.417 common/mlx5: not in enabled drivers build config 00:04:25.417 common/qat: not in enabled drivers build config 00:04:25.417 common/sfc_efx: not in enabled drivers build config 00:04:25.417 mempool/bucket: not in enabled drivers build config 00:04:25.417 mempool/cnxk: not in enabled drivers build config 00:04:25.417 mempool/dpaa: not in enabled drivers build config 00:04:25.417 mempool/dpaa2: not in enabled drivers build config 00:04:25.417 mempool/octeontx: not in enabled drivers build config 00:04:25.417 mempool/stack: not in enabled drivers build config 00:04:25.417 dma/cnxk: not in enabled drivers build config 00:04:25.417 dma/dpaa: not in enabled drivers build config 00:04:25.417 dma/dpaa2: not in enabled drivers build config 00:04:25.417 dma/hisilicon: not in enabled drivers build config 00:04:25.417 dma/idxd: not in enabled drivers build config 00:04:25.417 dma/ioat: not in enabled drivers build config 00:04:25.417 dma/skeleton: not in enabled drivers build config 00:04:25.417 net/af_packet: not in 
enabled drivers build config 00:04:25.417 net/af_xdp: not in enabled drivers build config 00:04:25.417 net/ark: not in enabled drivers build config 00:04:25.417 net/atlantic: not in enabled drivers build config 00:04:25.417 net/avp: not in enabled drivers build config 00:04:25.417 net/axgbe: not in enabled drivers build config 00:04:25.417 net/bnx2x: not in enabled drivers build config 00:04:25.417 net/bnxt: not in enabled drivers build config 00:04:25.417 net/bonding: not in enabled drivers build config 00:04:25.417 net/cnxk: not in enabled drivers build config 00:04:25.417 net/cxgbe: not in enabled drivers build config 00:04:25.417 net/dpaa: not in enabled drivers build config 00:04:25.417 net/dpaa2: not in enabled drivers build config 00:04:25.417 net/e1000: not in enabled drivers build config 00:04:25.417 net/ena: not in enabled drivers build config 00:04:25.417 net/enetc: not in enabled drivers build config 00:04:25.417 net/enetfec: not in enabled drivers build config 00:04:25.417 net/enic: not in enabled drivers build config 00:04:25.417 net/failsafe: not in enabled drivers build config 00:04:25.417 net/fm10k: not in enabled drivers build config 00:04:25.417 net/gve: not in enabled drivers build config 00:04:25.417 net/hinic: not in enabled drivers build config 00:04:25.417 net/hns3: not in enabled drivers build config 00:04:25.417 net/iavf: not in enabled drivers build config 00:04:25.417 net/ice: not in enabled drivers build config 00:04:25.417 net/idpf: not in enabled drivers build config 00:04:25.417 net/igc: not in enabled drivers build config 00:04:25.417 net/ionic: not in enabled drivers build config 00:04:25.417 net/ipn3ke: not in enabled drivers build config 00:04:25.417 net/ixgbe: not in enabled drivers build config 00:04:25.417 net/kni: not in enabled drivers build config 00:04:25.417 net/liquidio: not in enabled drivers build config 00:04:25.417 net/mana: not in enabled drivers build config 00:04:25.417 net/memif: not in enabled drivers build 
config 00:04:25.417 net/mlx4: not in enabled drivers build config 00:04:25.417 net/mlx5: not in enabled drivers build config 00:04:25.417 net/mvneta: not in enabled drivers build config 00:04:25.417 net/mvpp2: not in enabled drivers build config 00:04:25.417 net/netvsc: not in enabled drivers build config 00:04:25.417 net/nfb: not in enabled drivers build config 00:04:25.417 net/nfp: not in enabled drivers build config 00:04:25.417 net/ngbe: not in enabled drivers build config 00:04:25.417 net/null: not in enabled drivers build config 00:04:25.417 net/octeontx: not in enabled drivers build config 00:04:25.417 net/octeon_ep: not in enabled drivers build config 00:04:25.417 net/pcap: not in enabled drivers build config 00:04:25.417 net/pfe: not in enabled drivers build config 00:04:25.417 net/qede: not in enabled drivers build config 00:04:25.417 net/ring: not in enabled drivers build config 00:04:25.417 net/sfc: not in enabled drivers build config 00:04:25.417 net/softnic: not in enabled drivers build config 00:04:25.417 net/tap: not in enabled drivers build config 00:04:25.417 net/thunderx: not in enabled drivers build config 00:04:25.417 net/txgbe: not in enabled drivers build config 00:04:25.417 net/vdev_netvsc: not in enabled drivers build config 00:04:25.417 net/vhost: not in enabled drivers build config 00:04:25.417 net/virtio: not in enabled drivers build config 00:04:25.417 net/vmxnet3: not in enabled drivers build config 00:04:25.417 raw/cnxk_bphy: not in enabled drivers build config 00:04:25.417 raw/cnxk_gpio: not in enabled drivers build config 00:04:25.417 raw/dpaa2_cmdif: not in enabled drivers build config 00:04:25.417 raw/ifpga: not in enabled drivers build config 00:04:25.417 raw/ntb: not in enabled drivers build config 00:04:25.417 raw/skeleton: not in enabled drivers build config 00:04:25.417 crypto/armv8: not in enabled drivers build config 00:04:25.417 crypto/bcmfs: not in enabled drivers build config 00:04:25.417 crypto/caam_jr: not in enabled 
drivers build config 00:04:25.417 crypto/ccp: not in enabled drivers build config 00:04:25.417 crypto/cnxk: not in enabled drivers build config 00:04:25.417 crypto/dpaa_sec: not in enabled drivers build config 00:04:25.417 crypto/dpaa2_sec: not in enabled drivers build config 00:04:25.417 crypto/ipsec_mb: not in enabled drivers build config 00:04:25.417 crypto/mlx5: not in enabled drivers build config 00:04:25.417 crypto/mvsam: not in enabled drivers build config 00:04:25.417 crypto/nitrox: not in enabled drivers build config 00:04:25.417 crypto/null: not in enabled drivers build config 00:04:25.417 crypto/octeontx: not in enabled drivers build config 00:04:25.417 crypto/openssl: not in enabled drivers build config 00:04:25.417 crypto/scheduler: not in enabled drivers build config 00:04:25.417 crypto/uadk: not in enabled drivers build config 00:04:25.417 crypto/virtio: not in enabled drivers build config 00:04:25.417 compress/isal: not in enabled drivers build config 00:04:25.417 compress/mlx5: not in enabled drivers build config 00:04:25.417 compress/octeontx: not in enabled drivers build config 00:04:25.417 compress/zlib: not in enabled drivers build config 00:04:25.417 regex/mlx5: not in enabled drivers build config 00:04:25.417 regex/cn9k: not in enabled drivers build config 00:04:25.417 vdpa/ifc: not in enabled drivers build config 00:04:25.417 vdpa/mlx5: not in enabled drivers build config 00:04:25.417 vdpa/sfc: not in enabled drivers build config 00:04:25.417 event/cnxk: not in enabled drivers build config 00:04:25.417 event/dlb2: not in enabled drivers build config 00:04:25.417 event/dpaa: not in enabled drivers build config 00:04:25.417 event/dpaa2: not in enabled drivers build config 00:04:25.417 event/dsw: not in enabled drivers build config 00:04:25.417 event/opdl: not in enabled drivers build config 00:04:25.417 event/skeleton: not in enabled drivers build config 00:04:25.417 event/sw: not in enabled drivers build config 00:04:25.417 event/octeontx: 
not in enabled drivers build config 00:04:25.417 baseband/acc: not in enabled drivers build config 00:04:25.417 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:04:25.417 baseband/fpga_lte_fec: not in enabled drivers build config 00:04:25.417 baseband/la12xx: not in enabled drivers build config 00:04:25.417 baseband/null: not in enabled drivers build config 00:04:25.417 baseband/turbo_sw: not in enabled drivers build config 00:04:25.417 gpu/cuda: not in enabled drivers build config 00:04:25.417 00:04:25.417 00:04:25.417 Build targets in project: 316 00:04:25.417 00:04:25.417 DPDK 22.11.4 00:04:25.417 00:04:25.417 User defined options 00:04:25.417 libdir : lib 00:04:25.417 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:04:25.417 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:04:25.417 c_link_args : 00:04:25.417 enable_docs : false 00:04:25.417 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:04:25.417 enable_kmods : false 00:04:25.417 machine : native 00:04:25.417 tests : false 00:04:25.417 00:04:25.417 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:25.417 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
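The configure step ends with two deprecation warnings: the bare `meson [options]` form should be `meson setup [options]`, and config/meson.build flags `-Dmachine=native` as deprecated in favor of `cpu_instruction_set`. A sketch of the equivalent non-deprecated invocation, reusing the paths and options from the log (a config fragment for illustration, not a change the job actually made):

```shell
# Same configure as autobuild_common.sh@185, with the two deprecations resolved:
# explicit "setup" subcommand and cpu_instruction_set instead of machine.
meson setup build-tmp \
    --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
    --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
```

Either form produces the same build-tmp directory that the subsequent `ninja -C .../build-tmp -j48` step consumes.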
00:04:25.417 08:00:34 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:04:25.679 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:04:25.680 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:04:25.680 [2/745] Generating lib/rte_kvargs_def with a custom command 00:04:25.680 [3/745] Generating lib/rte_telemetry_def with a custom command 00:04:25.680 [4/745] Generating lib/rte_telemetry_mingw with a custom command 00:04:25.680 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:25.680 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:25.680 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:25.680 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:25.680 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:25.680 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:25.680 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:25.680 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:25.680 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:25.680 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:25.680 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:25.680 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:25.680 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:25.680 [18/745] Linking static target lib/librte_kvargs.a 00:04:25.680 [19/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:25.680 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 
00:04:25.942 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:25.942 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:25.942 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:25.942 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:25.942 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:25.942 [26/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:25.942 [27/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:25.942 [28/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:25.942 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:25.942 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:25.942 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:04:25.942 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:25.942 [33/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:25.942 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:25.942 [35/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:25.942 [36/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:25.942 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:25.942 [38/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:25.942 [39/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:25.942 [40/745] Generating lib/rte_eal_def with a custom command 00:04:25.942 [41/745] Generating lib/rte_eal_mingw with a custom command 00:04:25.942 [42/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:25.942 [43/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 
00:04:25.942 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:25.942 [45/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:25.942 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:25.942 [47/745] Generating lib/rte_ring_def with a custom command 00:04:25.942 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:25.942 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:25.942 [50/745] Generating lib/rte_ring_mingw with a custom command 00:04:25.942 [51/745] Generating lib/rte_rcu_def with a custom command 00:04:25.942 [52/745] Generating lib/rte_rcu_mingw with a custom command 00:04:25.942 [53/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:25.942 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:25.942 [55/745] Generating lib/rte_mempool_def with a custom command 00:04:25.942 [56/745] Generating lib/rte_mempool_mingw with a custom command 00:04:25.942 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:25.942 [58/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:25.942 [59/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:25.942 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:04:25.942 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:25.942 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:25.942 [63/745] Generating lib/rte_mbuf_mingw with a custom command 00:04:25.942 [64/745] Generating lib/rte_mbuf_def with a custom command 00:04:25.942 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:25.942 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:25.942 [67/745] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:25.942 [68/745] Generating lib/rte_net_def with a custom command 00:04:25.942 [69/745] Generating lib/rte_net_mingw with a custom command 00:04:25.942 [70/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:25.942 [71/745] Generating lib/rte_meter_def with a custom command 00:04:25.942 [72/745] Generating lib/rte_meter_mingw with a custom command 00:04:26.200 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:26.200 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:26.200 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:26.200 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:26.201 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:26.201 [78/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.201 [79/745] Generating lib/rte_ethdev_def with a custom command 00:04:26.201 [80/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:26.201 [81/745] Linking static target lib/librte_ring.a 00:04:26.201 [82/745] Generating lib/rte_ethdev_mingw with a custom command 00:04:26.201 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:26.201 [84/745] Linking target lib/librte_kvargs.so.23.0 00:04:26.201 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:26.201 [86/745] Linking static target lib/librte_meter.a 00:04:26.201 [87/745] Generating lib/rte_pci_def with a custom command 00:04:26.462 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:26.462 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:26.462 [90/745] Generating lib/rte_pci_mingw with a custom command 00:04:26.462 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:26.462 
[92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:26.462 [93/745] Linking static target lib/librte_pci.a 00:04:26.462 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:26.462 [95/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:04:26.462 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:26.462 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:26.462 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:26.733 [99/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.733 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:26.733 [101/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.733 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:26.733 [103/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:26.733 [104/745] Linking static target lib/librte_telemetry.a 00:04:26.733 [105/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:26.733 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:26.733 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:26.733 [108/745] Generating lib/rte_cmdline_def with a custom command 00:04:26.733 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:26.733 [110/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:26.733 [111/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.733 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:26.733 [113/745] Generating lib/rte_metrics_def with a custom command 00:04:26.733 [114/745] Generating 
lib/rte_metrics_mingw with a custom command 00:04:26.733 [115/745] Generating lib/rte_cmdline_mingw with a custom command 00:04:26.734 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:26.734 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:26.734 [118/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:27.000 [119/745] Generating lib/rte_hash_mingw with a custom command 00:04:27.000 [120/745] Generating lib/rte_hash_def with a custom command 00:04:27.000 [121/745] Generating lib/rte_timer_def with a custom command 00:04:27.000 [122/745] Generating lib/rte_timer_mingw with a custom command 00:04:27.000 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:27.000 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:04:27.000 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:27.263 [126/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:27.263 [127/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:27.263 [128/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:27.263 [129/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:27.263 [130/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:27.263 [131/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:27.263 [132/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:27.263 [133/745] Generating lib/rte_acl_def with a custom command 00:04:27.263 [134/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:27.263 [135/745] Generating lib/rte_acl_mingw with a custom command 00:04:27.263 [136/745] Generating lib/rte_bbdev_def with a custom command 00:04:27.263 [137/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.263 
[138/745] Generating lib/rte_bbdev_mingw with a custom command 00:04:27.263 [139/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:27.263 [140/745] Generating lib/rte_bitratestats_mingw with a custom command 00:04:27.263 [141/745] Generating lib/rte_bitratestats_def with a custom command 00:04:27.263 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:27.263 [143/745] Linking target lib/librte_telemetry.so.23.0 00:04:27.263 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:27.263 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:27.263 [146/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:27.263 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:27.520 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:27.520 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:27.520 [150/745] Generating lib/rte_bpf_def with a custom command 00:04:27.520 [151/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:27.520 [152/745] Generating lib/rte_cfgfile_def with a custom command 00:04:27.520 [153/745] Generating lib/rte_bpf_mingw with a custom command 00:04:27.520 [154/745] Generating lib/rte_cfgfile_mingw with a custom command 00:04:27.520 [155/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:04:27.520 [156/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:27.520 [157/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:27.520 [158/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:27.520 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:04:27.520 [160/745] Generating lib/rte_compressdev_def with a custom command 00:04:27.520 [161/745] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:27.520 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:27.783 [163/745] Generating lib/rte_cryptodev_def with a custom command 00:04:27.783 [164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:27.783 [165/745] Generating lib/rte_cryptodev_mingw with a custom command 00:04:27.783 [166/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:27.783 [167/745] Linking static target lib/librte_rcu.a 00:04:27.783 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:27.783 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:27.783 [170/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:27.783 [171/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:27.783 [172/745] Generating lib/rte_distributor_def with a custom command 00:04:27.783 [173/745] Linking static target lib/librte_timer.a 00:04:27.783 [174/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:27.783 [175/745] Linking static target lib/librte_cmdline.a 00:04:27.783 [176/745] Generating lib/rte_distributor_mingw with a custom command 00:04:27.783 [177/745] Generating lib/rte_efd_def with a custom command 00:04:27.783 [178/745] Generating lib/rte_efd_mingw with a custom command 00:04:27.783 [179/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:27.783 [180/745] Linking static target lib/librte_net.a 00:04:28.046 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:28.046 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:28.046 [183/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:04:28.046 [184/745] Linking static target lib/librte_cfgfile.a 00:04:28.046 [185/745] Linking static target lib/librte_mempool.a 00:04:28.046 [186/745] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:04:28.046 [187/745] Linking static target lib/librte_metrics.a 00:04:28.333 [188/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.333 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:04:28.333 [190/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.333 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.333 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:28.333 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:28.333 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:04:28.333 [195/745] Linking static target lib/librte_eal.a 00:04:28.333 [196/745] Generating lib/rte_eventdev_def with a custom command 00:04:28.333 [197/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:04:28.333 [198/745] Generating lib/rte_eventdev_mingw with a custom command 00:04:28.333 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:04:28.333 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:04:28.333 [201/745] Generating lib/rte_gpudev_def with a custom command 00:04:28.333 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:04:28.598 [203/745] Generating lib/rte_gpudev_mingw with a custom command 00:04:28.598 [204/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:04:28.598 [205/745] Linking static target lib/librte_bitratestats.a 00:04:28.598 [206/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.598 [207/745] Generating lib/rte_gro_def with a custom command 00:04:28.598 [208/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:04:28.598 [209/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to 
capture output) 00:04:28.598 [210/745] Generating lib/rte_gro_mingw with a custom command 00:04:28.598 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:04:28.598 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:04:28.859 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:28.859 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:04:28.859 [215/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.859 [216/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:28.859 [217/745] Generating lib/rte_gso_def with a custom command 00:04:28.859 [218/745] Generating lib/rte_gso_mingw with a custom command 00:04:28.859 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:04:28.859 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:04:29.119 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:04:29.119 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:29.119 [223/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:04:29.119 [224/745] Linking static target lib/librte_bbdev.a 00:04:29.119 [225/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:04:29.119 [226/745] Generating lib/rte_ip_frag_def with a custom command 00:04:29.119 [227/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:29.119 [228/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.119 [229/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.119 [230/745] Generating lib/rte_ip_frag_mingw with a custom command 00:04:29.119 [231/745] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:04:29.119 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:04:29.119 [233/745] Generating lib/rte_jobstats_def with a custom command 00:04:29.379 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:04:29.379 [235/745] Generating lib/rte_latencystats_def with a custom command 00:04:29.379 [236/745] Generating lib/rte_lpm_def with a custom command 00:04:29.379 [237/745] Generating lib/rte_lpm_mingw with a custom command 00:04:29.379 [238/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:29.379 [239/745] Linking static target lib/librte_compressdev.a 00:04:29.379 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:04:29.379 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:04:29.379 [242/745] Linking static target lib/librte_jobstats.a 00:04:29.379 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:04:29.643 [244/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:04:29.643 [245/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:29.643 [246/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:04:29.643 [247/745] Linking static target lib/librte_distributor.a 00:04:29.643 [248/745] Generating lib/rte_member_def with a custom command 00:04:29.906 [249/745] Generating lib/rte_member_mingw with a custom command 00:04:29.906 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:04:29.906 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:04:29.906 [252/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.906 [253/745] Generating lib/rte_pcapng_def with a custom command 00:04:29.906 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:04:29.906 [255/745] Compiling C object 
lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:04:29.906 [256/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.906 [257/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:04:30.167 [258/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:04:30.167 [259/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:04:30.167 [260/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:04:30.167 [261/745] Linking static target lib/librte_bpf.a 00:04:30.167 [262/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:04:30.167 [263/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:04:30.167 [264/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:30.167 [265/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:30.167 [266/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:04:30.167 [267/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.167 [268/745] Generating lib/rte_power_def with a custom command 00:04:30.167 [269/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:04:30.167 [270/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:30.167 [271/745] Generating lib/rte_power_mingw with a custom command 00:04:30.167 [272/745] Linking static target lib/librte_gro.a 00:04:30.167 [273/745] Generating lib/rte_rawdev_def with a custom command 00:04:30.167 [274/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:04:30.167 [275/745] Linking static target lib/librte_gpudev.a 00:04:30.167 [276/745] Generating lib/rte_rawdev_mingw with a custom command 00:04:30.167 [277/745] Generating lib/rte_regexdev_def with a custom command 00:04:30.167 [278/745] Generating lib/rte_regexdev_mingw with a custom command 00:04:30.167 [279/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 
00:04:30.167 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:30.167 [281/745] Generating lib/rte_dmadev_mingw with a custom command 00:04:30.167 [282/745] Generating lib/rte_dmadev_def with a custom command 00:04:30.441 [283/745] Generating lib/rte_rib_def with a custom command 00:04:30.441 [284/745] Generating lib/rte_rib_mingw with a custom command 00:04:30.441 [285/745] Generating lib/rte_reorder_def with a custom command 00:04:30.441 [286/745] Generating lib/rte_reorder_mingw with a custom command 00:04:30.441 [287/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:04:30.441 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:04:30.441 [289/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.708 [290/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.708 [291/745] Generating lib/rte_sched_def with a custom command 00:04:30.708 [292/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:04:30.708 [293/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:04:30.708 [294/745] Generating lib/rte_sched_mingw with a custom command 00:04:30.708 [295/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:04:30.708 [296/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:04:30.708 [297/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:04:30.708 [298/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:04:30.708 [299/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:04:30.708 [300/745] Generating lib/rte_security_def with a custom command 00:04:30.708 [301/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:04:30.708 [302/745] Linking static target lib/librte_latencystats.a 00:04:30.708 [303/745] Generating 
lib/rte_security_mingw with a custom command 00:04:30.708 [304/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:04:30.708 [305/745] Generating lib/rte_stack_mingw with a custom command 00:04:30.708 [306/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.708 [307/745] Generating lib/rte_stack_def with a custom command 00:04:30.708 [308/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:04:30.708 [309/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:04:30.984 [310/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:04:30.984 [311/745] Linking static target lib/librte_rawdev.a 00:04:30.984 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:04:30.984 [313/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:04:30.984 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:04:30.984 [315/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:04:30.984 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:04:30.984 [317/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:04:30.984 [318/745] Generating lib/rte_vhost_def with a custom command 00:04:30.984 [319/745] Linking static target lib/librte_stack.a 00:04:30.984 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:04:30.984 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:30.984 [322/745] Linking static target lib/librte_dmadev.a 00:04:30.984 [323/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:30.984 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:04:30.984 [325/745] Linking static target lib/librte_ip_frag.a 00:04:31.243 [326/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:31.243 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:04:31.243 [328/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.243 [329/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:04:31.243 [330/745] Generating lib/rte_ipsec_def with a custom command 00:04:31.243 [331/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:04:31.243 [332/745] Generating lib/rte_ipsec_mingw with a custom command 00:04:31.507 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:04:31.507 [334/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.507 [335/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.507 [336/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.766 [337/745] Generating lib/rte_fib_def with a custom command 00:04:31.766 [338/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:04:31.766 [339/745] Generating lib/rte_fib_mingw with a custom command 00:04:31.766 [340/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:31.766 [341/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:04:31.766 [342/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:31.766 [343/745] Linking static target lib/librte_gso.a 00:04:31.766 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:04:31.766 [345/745] Linking static target lib/librte_regexdev.a 00:04:31.766 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.037 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:04:32.037 [348/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.037 
[349/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:04:32.037 [350/745] Linking static target lib/librte_pcapng.a 00:04:32.037 [351/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:04:32.037 [352/745] Linking static target lib/librte_efd.a 00:04:32.037 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:04:32.037 [354/745] Linking static target lib/librte_lpm.a 00:04:32.295 [355/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:04:32.295 [356/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:04:32.295 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:32.295 [358/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:32.295 [359/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:32.295 [360/745] Linking static target lib/librte_reorder.a 00:04:32.295 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:32.555 [362/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:04:32.555 [363/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.555 [364/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:32.556 [365/745] Generating lib/rte_port_def with a custom command 00:04:32.556 [366/745] Generating lib/rte_port_mingw with a custom command 00:04:32.556 [367/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:04:32.556 [368/745] Linking static target lib/acl/libavx2_tmp.a 00:04:32.556 [369/745] Generating lib/rte_pdump_def with a custom command 00:04:32.556 [370/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:04:32.556 [371/745] Generating lib/rte_pdump_mingw with a custom command 00:04:32.556 [372/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.556 [373/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 
00:04:32.556 [374/745] Linking static target lib/librte_security.a 00:04:32.556 [375/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:04:32.556 [376/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:04:32.556 [377/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:04:32.556 [378/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:04:32.556 [379/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:32.556 [380/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:04:32.556 [381/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.819 [382/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:04:32.819 [383/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.819 [384/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:32.819 [385/745] Linking static target lib/librte_power.a 00:04:32.819 [386/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.819 [387/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:04:32.819 [388/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:33.085 [389/745] Linking static target lib/librte_hash.a 00:04:33.085 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:04:33.085 [391/745] Linking static target lib/librte_rib.a 00:04:33.085 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:04:33.085 [393/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:04:33.085 [394/745] Linking static target lib/acl/libavx512_tmp.a 00:04:33.085 [395/745] Linking static target lib/librte_acl.a 00:04:33.085 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:04:33.085 [397/745] Compiling C object 
lib/librte_port.a.p/port_rte_port_sched.c.o 00:04:33.344 [398/745] Generating lib/rte_table_def with a custom command 00:04:33.344 [399/745] Generating lib/rte_table_mingw with a custom command 00:04:33.344 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.607 [401/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:04:33.607 [402/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.873 [403/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.873 [404/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:33.873 [405/745] Linking static target lib/librte_ethdev.a 00:04:33.873 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:04:33.873 [407/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:33.873 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:04:33.873 [409/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:33.873 [410/745] Generating lib/rte_pipeline_def with a custom command 00:04:33.873 [411/745] Linking static target lib/librte_mbuf.a 00:04:33.873 [412/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:04:33.873 [413/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:04:33.873 [414/745] Generating lib/rte_pipeline_mingw with a custom command 00:04:33.873 [415/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:04:33.873 [416/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:04:33.873 [417/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:04:34.136 [418/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:04:34.136 [419/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:04:34.136 [420/745] Generating lib/rte_graph_def 
with a custom command 00:04:34.136 [421/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:04:34.136 [422/745] Linking static target lib/librte_fib.a 00:04:34.136 [423/745] Generating lib/rte_graph_mingw with a custom command 00:04:34.136 [424/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.136 [425/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:04:34.136 [426/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:04:34.136 [427/745] Linking static target lib/librte_member.a 00:04:34.136 [428/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.398 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:04:34.398 [430/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:04:34.398 [431/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:04:34.398 [432/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:04:34.398 [433/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:04:34.398 [434/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:04:34.398 [435/745] Generating lib/rte_node_def with a custom command 00:04:34.398 [436/745] Generating lib/rte_node_mingw with a custom command 00:04:34.398 [437/745] Linking static target lib/librte_eventdev.a 00:04:34.398 [438/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:04:34.398 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:04:34.661 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.661 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:34.661 [442/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:04:34.661 [443/745] Generating drivers/rte_bus_pci_def with a custom 
command 00:04:34.661 [444/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:04:34.661 [445/745] Linking static target lib/librte_sched.a 00:04:34.661 [446/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:04:34.661 [447/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:34.661 [448/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.661 [449/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:04:34.661 [450/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.661 [451/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:34.924 [452/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:04:34.924 [453/745] Generating drivers/rte_bus_vdev_def with a custom command 00:04:34.924 [454/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:34.924 [455/745] Generating drivers/rte_mempool_ring_def with a custom command 00:04:34.924 [456/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:04:34.924 [457/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:04:34.924 [458/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:04:34.924 [459/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:04:34.924 [460/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:34.924 [461/745] Linking static target lib/librte_cryptodev.a 00:04:34.924 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:35.182 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:35.182 [464/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:04:35.182 [465/745] Linking static target lib/librte_pdump.a 00:04:35.182 [466/745] Compiling C object 
lib/librte_graph.a.p/graph_graph_debug.c.o 00:04:35.182 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:04:35.182 [468/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:04:35.182 [469/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:35.182 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:04:35.182 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:35.182 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:04:35.182 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:04:35.445 [474/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:35.445 [475/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:04:35.445 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.445 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:04:35.445 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:04:35.445 [479/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:04:35.445 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:04:35.445 [481/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:04:35.445 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:04:35.445 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.703 [484/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:35.704 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:35.704 [486/745] Linking static target drivers/librte_bus_vdev.a 00:04:35.704 [487/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:04:35.704 [488/745] Linking static target lib/librte_table.a 00:04:35.704 [489/745] Compiling C 
object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:04:35.704 [490/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:04:35.704 [491/745] Linking static target lib/librte_ipsec.a 00:04:35.704 [492/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:35.964 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:35.964 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:35.964 [495/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:04:35.964 [496/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.228 [497/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:04:36.228 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:04:36.228 [499/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.228 [500/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:04:36.228 [501/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:04:36.228 [502/745] Linking static target lib/librte_graph.a 00:04:36.490 [503/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:04:36.490 [504/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:04:36.490 [505/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:04:36.490 [506/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:36.490 [507/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:04:36.490 [508/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:36.490 [509/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:36.490 [510/745] Linking static target drivers/librte_bus_pci.a 00:04:36.490 [511/745] 
Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:04:36.491 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:04:36.750 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:04:36.750 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.014 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:04:37.014 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.275 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:04:37.275 [518/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:04:37.275 [519/745] Linking static target lib/librte_port.a 00:04:37.275 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:04:37.275 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:04:37.537 [522/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:04:37.537 [523/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:04:37.537 [524/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.537 [525/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:37.537 [526/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:37.798 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.798 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:04:37.798 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:04:37.798 [530/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:37.798 [531/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 
00:04:37.798 [532/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:37.798 [533/745] Linking static target drivers/librte_mempool_ring.a 00:04:37.798 [534/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:38.061 [535/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:04:38.061 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:04:38.061 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:04:38.062 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:04:38.322 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:04:38.322 [540/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.322 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.587 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:04:38.587 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:04:38.849 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:04:38.849 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:04:38.849 [546/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:04:38.849 [547/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:04:38.849 [548/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:04:38.849 [549/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:04:38.849 [550/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:04:39.108 [551/745] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:04:39.108 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:04:39.367 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:04:39.367 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:04:39.367 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:04:39.632 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:04:39.632 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:04:39.632 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:04:39.889 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:04:40.159 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:04:40.159 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:04:40.159 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:04:40.159 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:04:40.159 [564/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:04:40.159 [565/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:04:40.159 [566/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:04:40.423 [567/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:04:40.423 [568/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:04:40.423 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:04:40.423 [570/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:04:40.423 [571/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:04:40.684 [572/745] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:04:40.684 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:04:40.684 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:04:40.946 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:04:40.946 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:04:40.946 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:04:40.946 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:04:40.946 [579/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.946 [580/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:04:40.946 [581/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:04:40.946 [582/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:04:40.946 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:04:40.946 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:04:41.207 [585/745] Linking target lib/librte_eal.so.23.0 00:04:41.207 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:04:41.207 [587/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:04:41.465 [588/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:04:41.465 [589/745] Linking target lib/librte_ring.so.23.0 00:04:41.727 [590/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:04:41.727 [591/745] Linking target lib/librte_meter.so.23.0 00:04:41.727 [592/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.727 [593/745] Generating symbol file 
lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:04:41.727 [594/745] Linking target lib/librte_pci.so.23.0 00:04:41.986 [595/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:04:41.986 [596/745] Linking target lib/librte_rcu.so.23.0 00:04:41.986 [597/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:04:41.986 [598/745] Linking target lib/librte_mempool.so.23.0 00:04:41.986 [599/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:04:41.986 [600/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:04:41.986 [601/745] Linking target lib/librte_timer.so.23.0 00:04:41.986 [602/745] Linking target lib/librte_acl.so.23.0 00:04:41.986 [603/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:04:41.986 [604/745] Linking target lib/librte_cfgfile.so.23.0 00:04:41.986 [605/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:04:41.986 [606/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:04:41.986 [607/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:04:41.986 [608/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:04:41.986 [609/745] Linking target lib/librte_jobstats.so.23.0 00:04:41.986 [610/745] Linking target lib/librte_rawdev.so.23.0 00:04:41.986 [611/745] Linking target lib/librte_dmadev.so.23.0 00:04:42.248 [612/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:04:42.248 [613/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:04:42.248 [614/745] Linking target lib/librte_stack.so.23.0 00:04:42.248 [615/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:04:42.248 [616/745] Linking target lib/librte_graph.so.23.0 00:04:42.248 [617/745] Linking target drivers/librte_bus_pci.so.23.0 
00:04:42.248 [618/745] Linking target drivers/librte_bus_vdev.so.23.0 00:04:42.248 [619/745] Linking target drivers/librte_mempool_ring.so.23.0 00:04:42.248 [620/745] Linking target lib/librte_rib.so.23.0 00:04:42.248 [621/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:04:42.248 [622/745] Linking target lib/librte_mbuf.so.23.0 00:04:42.248 [623/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:04:42.248 [624/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:04:42.248 [625/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:04:42.248 [626/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:04:42.511 [627/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:04:42.511 [628/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:04:42.511 [629/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:04:42.511 [630/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:04:42.511 [631/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:04:42.511 [632/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:04:42.511 [633/745] Linking target lib/librte_bbdev.so.23.0 00:04:42.511 [634/745] Linking target lib/librte_distributor.so.23.0 00:04:42.511 [635/745] Linking target lib/librte_reorder.so.23.0 00:04:42.511 [636/745] Linking target lib/librte_compressdev.so.23.0 00:04:42.511 [637/745] Linking target lib/librte_gpudev.so.23.0 00:04:42.511 [638/745] Linking target lib/librte_net.so.23.0 00:04:42.511 [639/745] Linking target lib/librte_sched.so.23.0 00:04:42.511 [640/745] Linking target lib/librte_cryptodev.so.23.0 00:04:42.511 [641/745] Linking target lib/librte_regexdev.so.23.0 00:04:42.511 [642/745] Linking 
target lib/librte_fib.so.23.0 00:04:42.511 [643/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:04:42.511 [644/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:04:42.511 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:04:42.511 [646/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:04:42.511 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:04:42.768 [648/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:04:42.768 [649/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:04:42.768 [650/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:04:42.768 [651/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:04:42.768 [652/745] Linking target lib/librte_security.so.23.0 00:04:42.768 [653/745] Linking target lib/librte_hash.so.23.0 00:04:42.768 [654/745] Linking target lib/librte_cmdline.so.23.0 00:04:42.768 [655/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:04:42.768 [656/745] Linking target lib/librte_ethdev.so.23.0 00:04:42.768 [657/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:04:42.768 [658/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:04:42.768 [659/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:04:42.768 [660/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:04:42.768 [661/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:04:42.768 [662/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:04:42.768 [663/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:04:42.768 [664/745] Linking target lib/librte_efd.so.23.0 00:04:43.026 [665/745] Linking target lib/librte_lpm.so.23.0 00:04:43.026 
[666/745] Linking target lib/librte_ipsec.so.23.0 00:04:43.026 [667/745] Linking target lib/librte_pcapng.so.23.0 00:04:43.026 [668/745] Linking target lib/librte_member.so.23.0 00:04:43.026 [669/745] Linking target lib/librte_gro.so.23.0 00:04:43.026 [670/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:04:43.026 [671/745] Linking target lib/librte_gso.so.23.0 00:04:43.026 [672/745] Linking target lib/librte_metrics.so.23.0 00:04:43.026 [673/745] Linking target lib/librte_ip_frag.so.23.0 00:04:43.026 [674/745] Linking target lib/librte_bpf.so.23.0 00:04:43.026 [675/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:04:43.026 [676/745] Linking target lib/librte_power.so.23.0 00:04:43.026 [677/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:04:43.026 [678/745] Linking target lib/librte_eventdev.so.23.0 00:04:43.026 [679/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:04:43.026 [680/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:04:43.026 [681/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:04:43.026 [682/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:04:43.026 [683/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:04:43.026 [684/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:04:43.026 [685/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:04:43.026 [686/745] Linking target lib/librte_latencystats.so.23.0 00:04:43.026 [687/745] Linking target lib/librte_bitratestats.so.23.0 00:04:43.026 [688/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:04:43.026 [689/745] Linking target lib/librte_pdump.so.23.0 00:04:43.283 [690/745] Linking target lib/librte_port.so.23.0 00:04:43.283 [691/745] Generating 
symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:04:43.283 [692/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:04:43.283 [693/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:04:43.283 [694/745] Linking target lib/librte_table.so.23.0 00:04:43.549 [695/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:04:43.549 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:04:43.811 [697/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:04:43.811 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:04:44.068 [699/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:04:44.068 [700/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:04:44.068 [701/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:04:44.324 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:04:44.581 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:04:44.581 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:04:44.581 [705/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:04:44.581 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:04:44.581 [707/745] Linking static target drivers/librte_net_i40e.a 00:04:44.838 [708/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:04:45.096 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:04:45.096 [710/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.096 [711/745] Linking target drivers/librte_net_i40e.so.23.0 00:04:45.354 [712/745] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:04:46.284 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:04:46.284 [714/745] Linking static target lib/librte_node.a 00:04:46.284 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.540 [716/745] Linking target lib/librte_node.so.23.0 00:04:46.540 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:04:47.103 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:04:48.034 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:04:56.131 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:28.178 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:28.178 [722/745] Linking static target lib/librte_vhost.a 00:05:28.178 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.178 [724/745] Linking target lib/librte_vhost.so.23.0 00:05:46.295 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:05:46.295 [726/745] Linking static target lib/librte_pipeline.a 00:05:46.295 [727/745] Linking target app/dpdk-dumpcap 00:05:46.295 [728/745] Linking target app/dpdk-test-bbdev 00:05:46.295 [729/745] Linking target app/dpdk-test-pipeline 00:05:46.295 [730/745] Linking target app/dpdk-test-security-perf 00:05:46.295 [731/745] Linking target app/dpdk-test-regex 00:05:46.295 [732/745] Linking target app/dpdk-test-compress-perf 00:05:46.295 [733/745] Linking target app/dpdk-test-eventdev 00:05:46.295 [734/745] Linking target app/dpdk-test-cmdline 00:05:46.295 [735/745] Linking target app/dpdk-test-flow-perf 00:05:46.295 [736/745] Linking target app/dpdk-pdump 00:05:46.295 [737/745] Linking target app/dpdk-test-sad 00:05:46.295 [738/745] Linking target app/dpdk-test-acl 00:05:46.295 [739/745] Linking target app/dpdk-proc-info 00:05:46.295 [740/745] 
Linking target app/dpdk-test-gpudev 00:05:46.295 [741/745] Linking target app/dpdk-test-fib 00:05:46.295 [742/745] Linking target app/dpdk-test-crypto-perf 00:05:46.295 [743/745] Linking target app/dpdk-testpmd 00:05:46.295 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.295 [745/745] Linking target lib/librte_pipeline.so.23.0 00:05:46.295 08:01:55 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:05:46.295 08:01:55 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:05:46.295 08:01:55 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:05:46.295 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:05:46.295 [0/1] Installing files. 00:05:46.554 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:05:46.554 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.554 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.554 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.554 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.554 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:05:46.554 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.554 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:05:46.555 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:46.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:05:46.558 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:05:46.558 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:05:46.559 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:05:46.559 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:05:46.560 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:05:46.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:05:46.560 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.560 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.560 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.560 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.560 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.560 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.560 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.560 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.560 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.560 Installing lib/librte_rcu.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.560 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_hash.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing 
lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing 
lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_sched.so.23.0 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:46.818 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:05:47.080 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:05:47.080 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:05:47.080 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.080 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:05:47.080 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-bbdev to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.080 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:05:47.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:05:47.083 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:05:47.083 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:05:47.083 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:05:47.083 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:05:47.083 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 
00:05:47.083 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:05:47.083 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:05:47.083 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:05:47.083 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:05:47.083 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:05:47.083 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:05:47.084 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:05:47.084 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:05:47.084 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:05:47.084 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:05:47.084 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:05:47.084 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:05:47.084 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:05:47.084 Installing symlink pointing to librte_ethdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:05:47.084 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:05:47.084 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:05:47.084 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:05:47.084 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:05:47.084 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:05:47.084 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:05:47.084 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:05:47.084 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:05:47.084 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:05:47.084 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:05:47.084 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:05:47.084 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:05:47.084 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:05:47.084 Installing symlink pointing to 
librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:05:47.084 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:05:47.084 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:05:47.084 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:05:47.084 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:05:47.084 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:05:47.084 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:05:47.084 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:05:47.084 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:05:47.084 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:05:47.084 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:05:47.084 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:05:47.084 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:05:47.084 Installing symlink pointing to librte_distributor.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:05:47.084 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:05:47.084 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:05:47.084 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:05:47.084 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:05:47.084 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:05:47.084 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:05:47.084 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:05:47.084 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:05:47.084 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:05:47.084 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:05:47.084 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:05:47.084 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:05:47.084 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:05:47.084 Installing 
symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:05:47.084 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:05:47.084 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:05:47.084 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:05:47.084 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:05:47.084 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:05:47.084 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:05:47.084 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:05:47.084 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:05:47.084 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:05:47.084 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:05:47.084 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:05:47.084 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:05:47.084 Installing symlink pointing to librte_regexdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:05:47.084 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:05:47.084 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:05:47.084 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:05:47.084 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:05:47.084 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:05:47.084 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:05:47.084 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:05:47.084 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:05:47.084 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:05:47.084 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:05:47.084 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:05:47.084 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:05:47.084 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:05:47.084 
Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:05:47.084 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:05:47.084 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:05:47.084 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:05:47.084 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:05:47.084 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:05:47.084 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:05:47.084 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:05:47.085 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:05:47.085 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:05:47.085 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:05:47.085 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:05:47.085 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:05:47.085 Installing symlink pointing to librte_pipeline.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:05:47.085 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:05:47.085 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:05:47.085 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:05:47.085 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:05:47.085 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:05:47.085 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:47.085 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:05:47.085 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:47.342 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:05:47.342 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:05:47.342 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:05:47.342 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:05:47.342 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:05:47.342 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:05:47.342 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:05:47.342 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:05:47.342 
'./librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:05:47.342 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:05:47.342 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:05:47.342 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:05:47.342 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:05:47.342 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:47.342 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:05:47.342 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:47.342 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:05:47.342 08:01:56 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:05:47.342 08:01:56 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:47.342 00:05:47.342 real 1m26.919s 00:05:47.342 user 14m26.755s 00:05:47.342 sys 1m48.247s 00:05:47.342 08:01:56 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:05:47.342 08:01:56 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:05:47.342 ************************************ 00:05:47.342 END TEST build_native_dpdk 00:05:47.342 ************************************ 00:05:47.342 08:01:56 -- common/autotest_common.sh@1142 -- $ return 0 00:05:47.342 08:01:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:47.342 08:01:56 -- spdk/autobuild.sh@47 -- 
$ [[ 0 -eq 1 ]] 00:05:47.342 08:01:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:47.342 08:01:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:47.342 08:01:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:47.342 08:01:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:47.342 08:01:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:47.342 08:01:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:05:47.342 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:05:47.342 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:47.342 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:47.342 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:47.905 Using 'verbs' RDMA provider 00:05:58.431 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:06:06.552 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:06:06.809 Creating mk/config.mk...done. 00:06:06.809 Creating mk/cc.flags.mk...done. 00:06:06.809 Type 'make' to build. 
00:06:06.809 08:02:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:06:06.809 08:02:16 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:06:06.809 08:02:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:06:06.809 08:02:16 -- common/autotest_common.sh@10 -- $ set +x 00:06:06.809 ************************************ 00:06:06.809 START TEST make 00:06:06.809 ************************************ 00:06:06.809 08:02:16 make -- common/autotest_common.sh@1123 -- $ make -j48 00:06:07.065 make[1]: Nothing to be done for 'all'. 00:06:08.448 The Meson build system 00:06:08.448 Version: 1.3.1 00:06:08.448 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:06:08.448 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:08.448 Build type: native build 00:06:08.448 Project name: libvfio-user 00:06:08.448 Project version: 0.0.1 00:06:08.448 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:06:08.448 C linker for the host machine: gcc ld.bfd 2.39-16 00:06:08.448 Host machine cpu family: x86_64 00:06:08.448 Host machine cpu: x86_64 00:06:08.448 Run-time dependency threads found: YES 00:06:08.448 Library dl found: YES 00:06:08.448 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:06:08.448 Run-time dependency json-c found: YES 0.17 00:06:08.448 Run-time dependency cmocka found: YES 1.1.7 00:06:08.448 Program pytest-3 found: NO 00:06:08.448 Program flake8 found: NO 00:06:08.448 Program misspell-fixer found: NO 00:06:08.448 Program restructuredtext-lint found: NO 00:06:08.448 Program valgrind found: YES (/usr/bin/valgrind) 00:06:08.448 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:08.448 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:08.448 Compiler for C supports arguments -Wwrite-strings: YES 00:06:08.448 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:06:08.448 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:06:08.448 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:06:08.448 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:06:08.448 Build targets in project: 8 00:06:08.448 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:06:08.448 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:06:08.448 00:06:08.448 libvfio-user 0.0.1 00:06:08.448 00:06:08.448 User defined options 00:06:08.448 buildtype : debug 00:06:08.448 default_library: shared 00:06:08.448 libdir : /usr/local/lib 00:06:08.448 00:06:08.448 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:09.389 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:09.389 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:06:09.389 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:06:09.650 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:06:09.650 [4/37] Compiling C object samples/null.p/null.c.o 00:06:09.650 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:06:09.650 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:06:09.650 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:06:09.650 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:06:09.651 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:06:09.651 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:06:09.651 [11/37] Compiling C object 
samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:06:09.651 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:06:09.651 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:06:09.651 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:06:09.651 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:06:09.651 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:06:09.651 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:06:09.651 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:06:09.651 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:06:09.651 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:06:09.651 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:06:09.651 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:06:09.651 [23/37] Compiling C object samples/client.p/client.c.o 00:06:09.651 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:06:09.651 [25/37] Compiling C object samples/server.p/server.c.o 00:06:09.651 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:06:09.651 [27/37] Linking target samples/client 00:06:09.911 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:06:09.911 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:06:09.911 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:06:10.172 [31/37] Linking target test/unit_tests 00:06:10.172 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:06:10.172 [33/37] Linking target samples/server 00:06:10.172 [34/37] Linking target samples/lspci 00:06:10.172 [35/37] Linking target samples/gpio-pci-idio-16 00:06:10.172 [36/37] Linking target samples/null 00:06:10.172 [37/37] Linking target samples/shadow_ioeventfd_server 00:06:10.172 INFO: autodetecting backend as ninja 00:06:10.172 INFO: calculating backend command to run: 
/usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:10.172 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:11.112 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:11.112 ninja: no work to do. 00:06:23.296 CC lib/log/log.o 00:06:23.296 CC lib/log/log_flags.o 00:06:23.296 CC lib/log/log_deprecated.o 00:06:23.296 CC lib/ut_mock/mock.o 00:06:23.296 CC lib/ut/ut.o 00:06:23.296 LIB libspdk_log.a 00:06:23.296 LIB libspdk_ut.a 00:06:23.296 LIB libspdk_ut_mock.a 00:06:23.296 SO libspdk_log.so.7.0 00:06:23.296 SO libspdk_ut.so.2.0 00:06:23.296 SO libspdk_ut_mock.so.6.0 00:06:23.296 SYMLINK libspdk_ut_mock.so 00:06:23.296 SYMLINK libspdk_ut.so 00:06:23.296 SYMLINK libspdk_log.so 00:06:23.296 CXX lib/trace_parser/trace.o 00:06:23.296 CC lib/dma/dma.o 00:06:23.296 CC lib/util/base64.o 00:06:23.296 CC lib/util/bit_array.o 00:06:23.296 CC lib/ioat/ioat.o 00:06:23.296 CC lib/util/cpuset.o 00:06:23.296 CC lib/util/crc16.o 00:06:23.296 CC lib/util/crc32.o 00:06:23.296 CC lib/util/crc32c.o 00:06:23.296 CC lib/util/crc32_ieee.o 00:06:23.296 CC lib/util/crc64.o 00:06:23.296 CC lib/util/dif.o 00:06:23.296 CC lib/util/fd.o 00:06:23.296 CC lib/util/fd_group.o 00:06:23.296 CC lib/util/file.o 00:06:23.296 CC lib/util/hexlify.o 00:06:23.296 CC lib/util/iov.o 00:06:23.296 CC lib/util/math.o 00:06:23.296 CC lib/util/net.o 00:06:23.296 CC lib/util/pipe.o 00:06:23.296 CC lib/util/strerror_tls.o 00:06:23.296 CC lib/util/string.o 00:06:23.296 CC lib/util/uuid.o 00:06:23.296 CC lib/util/xor.o 00:06:23.296 CC lib/util/zipf.o 00:06:23.296 CC lib/vfio_user/host/vfio_user_pci.o 00:06:23.296 CC lib/vfio_user/host/vfio_user.o 00:06:23.296 LIB libspdk_dma.a 00:06:23.296 SO libspdk_dma.so.4.0 00:06:23.296 SYMLINK libspdk_dma.so 00:06:23.296 LIB 
libspdk_ioat.a 00:06:23.296 SO libspdk_ioat.so.7.0 00:06:23.296 SYMLINK libspdk_ioat.so 00:06:23.296 LIB libspdk_vfio_user.a 00:06:23.296 SO libspdk_vfio_user.so.5.0 00:06:23.296 SYMLINK libspdk_vfio_user.so 00:06:23.296 LIB libspdk_util.a 00:06:23.296 SO libspdk_util.so.10.0 00:06:23.553 SYMLINK libspdk_util.so 00:06:23.553 CC lib/json/json_parse.o 00:06:23.553 CC lib/conf/conf.o 00:06:23.553 CC lib/json/json_util.o 00:06:23.553 CC lib/env_dpdk/env.o 00:06:23.553 CC lib/json/json_write.o 00:06:23.553 CC lib/env_dpdk/memory.o 00:06:23.553 CC lib/rdma_provider/common.o 00:06:23.553 CC lib/env_dpdk/pci.o 00:06:23.553 CC lib/idxd/idxd.o 00:06:23.553 CC lib/env_dpdk/init.o 00:06:23.553 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:23.553 CC lib/env_dpdk/threads.o 00:06:23.553 CC lib/vmd/vmd.o 00:06:23.553 CC lib/env_dpdk/pci_ioat.o 00:06:23.553 CC lib/rdma_utils/rdma_utils.o 00:06:23.553 CC lib/idxd/idxd_kernel.o 00:06:23.553 CC lib/vmd/led.o 00:06:23.553 CC lib/env_dpdk/pci_virtio.o 00:06:23.553 CC lib/idxd/idxd_user.o 00:06:23.553 CC lib/env_dpdk/pci_vmd.o 00:06:23.553 CC lib/env_dpdk/pci_idxd.o 00:06:23.553 CC lib/env_dpdk/pci_event.o 00:06:23.553 CC lib/env_dpdk/sigbus_handler.o 00:06:23.553 CC lib/env_dpdk/pci_dpdk.o 00:06:23.553 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:23.553 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:23.810 LIB libspdk_trace_parser.a 00:06:23.810 SO libspdk_trace_parser.so.5.0 00:06:23.810 LIB libspdk_conf.a 00:06:23.810 SYMLINK libspdk_trace_parser.so 00:06:23.810 LIB libspdk_rdma_provider.a 00:06:23.810 SO libspdk_conf.so.6.0 00:06:23.810 SO libspdk_rdma_provider.so.6.0 00:06:24.066 LIB libspdk_rdma_utils.a 00:06:24.066 LIB libspdk_json.a 00:06:24.066 SYMLINK libspdk_conf.so 00:06:24.066 SO libspdk_rdma_utils.so.1.0 00:06:24.066 SYMLINK libspdk_rdma_provider.so 00:06:24.066 SO libspdk_json.so.6.0 00:06:24.066 SYMLINK libspdk_rdma_utils.so 00:06:24.066 SYMLINK libspdk_json.so 00:06:24.324 CC lib/jsonrpc/jsonrpc_server.o 00:06:24.324 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:06:24.324 CC lib/jsonrpc/jsonrpc_client.o 00:06:24.324 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:24.324 LIB libspdk_idxd.a 00:06:24.324 LIB libspdk_vmd.a 00:06:24.324 SO libspdk_idxd.so.12.0 00:06:24.324 SO libspdk_vmd.so.6.0 00:06:24.324 SYMLINK libspdk_idxd.so 00:06:24.324 SYMLINK libspdk_vmd.so 00:06:24.581 LIB libspdk_jsonrpc.a 00:06:24.581 SO libspdk_jsonrpc.so.6.0 00:06:24.581 SYMLINK libspdk_jsonrpc.so 00:06:24.838 CC lib/rpc/rpc.o 00:06:24.838 LIB libspdk_rpc.a 00:06:25.096 SO libspdk_rpc.so.6.0 00:06:25.096 SYMLINK libspdk_rpc.so 00:06:25.096 CC lib/keyring/keyring.o 00:06:25.096 CC lib/notify/notify.o 00:06:25.096 CC lib/keyring/keyring_rpc.o 00:06:25.096 CC lib/notify/notify_rpc.o 00:06:25.096 CC lib/trace/trace.o 00:06:25.096 CC lib/trace/trace_flags.o 00:06:25.096 CC lib/trace/trace_rpc.o 00:06:25.356 LIB libspdk_notify.a 00:06:25.356 SO libspdk_notify.so.6.0 00:06:25.356 LIB libspdk_keyring.a 00:06:25.356 SYMLINK libspdk_notify.so 00:06:25.356 LIB libspdk_trace.a 00:06:25.356 SO libspdk_keyring.so.1.0 00:06:25.356 SO libspdk_trace.so.10.0 00:06:25.614 SYMLINK libspdk_keyring.so 00:06:25.614 SYMLINK libspdk_trace.so 00:06:25.614 LIB libspdk_env_dpdk.a 00:06:25.614 CC lib/thread/thread.o 00:06:25.614 CC lib/thread/iobuf.o 00:06:25.614 CC lib/sock/sock.o 00:06:25.614 CC lib/sock/sock_rpc.o 00:06:25.614 SO libspdk_env_dpdk.so.14.1 00:06:25.871 SYMLINK libspdk_env_dpdk.so 00:06:26.129 LIB libspdk_sock.a 00:06:26.129 SO libspdk_sock.so.10.0 00:06:26.129 SYMLINK libspdk_sock.so 00:06:26.387 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:26.387 CC lib/nvme/nvme_ctrlr.o 00:06:26.387 CC lib/nvme/nvme_fabric.o 00:06:26.387 CC lib/nvme/nvme_ns_cmd.o 00:06:26.387 CC lib/nvme/nvme_ns.o 00:06:26.387 CC lib/nvme/nvme_pcie_common.o 00:06:26.387 CC lib/nvme/nvme_pcie.o 00:06:26.387 CC lib/nvme/nvme_qpair.o 00:06:26.387 CC lib/nvme/nvme.o 00:06:26.387 CC lib/nvme/nvme_quirks.o 00:06:26.387 CC lib/nvme/nvme_transport.o 00:06:26.387 CC 
lib/nvme/nvme_discovery.o 00:06:26.387 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:26.387 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:26.387 CC lib/nvme/nvme_tcp.o 00:06:26.387 CC lib/nvme/nvme_opal.o 00:06:26.387 CC lib/nvme/nvme_io_msg.o 00:06:26.387 CC lib/nvme/nvme_poll_group.o 00:06:26.387 CC lib/nvme/nvme_zns.o 00:06:26.387 CC lib/nvme/nvme_stubs.o 00:06:26.387 CC lib/nvme/nvme_auth.o 00:06:26.387 CC lib/nvme/nvme_cuse.o 00:06:26.387 CC lib/nvme/nvme_vfio_user.o 00:06:26.387 CC lib/nvme/nvme_rdma.o 00:06:27.317 LIB libspdk_thread.a 00:06:27.317 SO libspdk_thread.so.10.1 00:06:27.317 SYMLINK libspdk_thread.so 00:06:27.574 CC lib/blob/blobstore.o 00:06:27.574 CC lib/vfu_tgt/tgt_endpoint.o 00:06:27.574 CC lib/virtio/virtio.o 00:06:27.574 CC lib/accel/accel.o 00:06:27.574 CC lib/init/json_config.o 00:06:27.574 CC lib/blob/request.o 00:06:27.574 CC lib/accel/accel_rpc.o 00:06:27.574 CC lib/virtio/virtio_vhost_user.o 00:06:27.574 CC lib/init/subsystem.o 00:06:27.574 CC lib/vfu_tgt/tgt_rpc.o 00:06:27.574 CC lib/blob/zeroes.o 00:06:27.574 CC lib/accel/accel_sw.o 00:06:27.574 CC lib/virtio/virtio_vfio_user.o 00:06:27.574 CC lib/init/subsystem_rpc.o 00:06:27.574 CC lib/blob/blob_bs_dev.o 00:06:27.574 CC lib/virtio/virtio_pci.o 00:06:27.574 CC lib/init/rpc.o 00:06:27.831 LIB libspdk_init.a 00:06:27.831 SO libspdk_init.so.5.0 00:06:27.831 LIB libspdk_virtio.a 00:06:28.088 LIB libspdk_vfu_tgt.a 00:06:28.088 SYMLINK libspdk_init.so 00:06:28.088 SO libspdk_vfu_tgt.so.3.0 00:06:28.088 SO libspdk_virtio.so.7.0 00:06:28.088 SYMLINK libspdk_vfu_tgt.so 00:06:28.088 SYMLINK libspdk_virtio.so 00:06:28.088 CC lib/event/app.o 00:06:28.088 CC lib/event/reactor.o 00:06:28.088 CC lib/event/log_rpc.o 00:06:28.088 CC lib/event/app_rpc.o 00:06:28.088 CC lib/event/scheduler_static.o 00:06:28.650 LIB libspdk_event.a 00:06:28.650 SO libspdk_event.so.14.0 00:06:28.650 LIB libspdk_accel.a 00:06:28.650 SYMLINK libspdk_event.so 00:06:28.650 SO libspdk_accel.so.16.0 00:06:28.650 LIB libspdk_nvme.a 
00:06:28.650 SYMLINK libspdk_accel.so 00:06:28.906 SO libspdk_nvme.so.13.1 00:06:28.906 CC lib/bdev/bdev.o 00:06:28.906 CC lib/bdev/bdev_rpc.o 00:06:28.906 CC lib/bdev/bdev_zone.o 00:06:28.906 CC lib/bdev/part.o 00:06:28.906 CC lib/bdev/scsi_nvme.o 00:06:29.164 SYMLINK libspdk_nvme.so 00:06:30.534 LIB libspdk_blob.a 00:06:30.534 SO libspdk_blob.so.11.0 00:06:30.791 SYMLINK libspdk_blob.so 00:06:30.792 CC lib/lvol/lvol.o 00:06:30.792 CC lib/blobfs/blobfs.o 00:06:30.792 CC lib/blobfs/tree.o 00:06:31.354 LIB libspdk_bdev.a 00:06:31.611 SO libspdk_bdev.so.16.0 00:06:31.611 SYMLINK libspdk_bdev.so 00:06:31.611 LIB libspdk_lvol.a 00:06:31.611 LIB libspdk_blobfs.a 00:06:31.611 SO libspdk_lvol.so.10.0 00:06:31.880 SO libspdk_blobfs.so.10.0 00:06:31.880 CC lib/ublk/ublk.o 00:06:31.880 CC lib/nvmf/ctrlr.o 00:06:31.880 CC lib/scsi/dev.o 00:06:31.880 CC lib/nbd/nbd.o 00:06:31.880 CC lib/scsi/lun.o 00:06:31.880 CC lib/nvmf/ctrlr_discovery.o 00:06:31.880 CC lib/ublk/ublk_rpc.o 00:06:31.880 CC lib/nbd/nbd_rpc.o 00:06:31.880 CC lib/nvmf/ctrlr_bdev.o 00:06:31.880 CC lib/scsi/port.o 00:06:31.880 CC lib/ftl/ftl_core.o 00:06:31.880 CC lib/scsi/scsi.o 00:06:31.880 CC lib/nvmf/subsystem.o 00:06:31.880 CC lib/ftl/ftl_init.o 00:06:31.880 CC lib/scsi/scsi_bdev.o 00:06:31.880 CC lib/ftl/ftl_layout.o 00:06:31.880 CC lib/scsi/scsi_pr.o 00:06:31.880 CC lib/nvmf/nvmf.o 00:06:31.880 CC lib/ftl/ftl_debug.o 00:06:31.880 CC lib/scsi/scsi_rpc.o 00:06:31.880 CC lib/nvmf/nvmf_rpc.o 00:06:31.880 CC lib/scsi/task.o 00:06:31.880 CC lib/nvmf/transport.o 00:06:31.880 CC lib/nvmf/tcp.o 00:06:31.880 CC lib/ftl/ftl_io.o 00:06:31.880 CC lib/nvmf/stubs.o 00:06:31.880 CC lib/ftl/ftl_sb.o 00:06:31.880 CC lib/nvmf/mdns_server.o 00:06:31.880 CC lib/ftl/ftl_l2p.o 00:06:31.880 CC lib/nvmf/vfio_user.o 00:06:31.880 CC lib/ftl/ftl_l2p_flat.o 00:06:31.880 CC lib/nvmf/rdma.o 00:06:31.880 CC lib/ftl/ftl_nv_cache.o 00:06:31.880 CC lib/nvmf/auth.o 00:06:31.880 CC lib/ftl/ftl_band.o 00:06:31.880 CC lib/ftl/ftl_band_ops.o 
00:06:31.880 CC lib/ftl/ftl_writer.o 00:06:31.880 CC lib/ftl/ftl_rq.o 00:06:31.880 CC lib/ftl/ftl_reloc.o 00:06:31.880 CC lib/ftl/ftl_l2p_cache.o 00:06:31.880 CC lib/ftl/ftl_p2l.o 00:06:31.880 CC lib/ftl/mngt/ftl_mngt.o 00:06:31.880 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:31.880 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:31.880 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:31.880 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:31.880 SYMLINK libspdk_lvol.so 00:06:31.880 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:31.880 SYMLINK libspdk_blobfs.so 00:06:31.880 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:32.139 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:32.139 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:32.139 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:32.139 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:32.139 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:32.139 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:32.139 CC lib/ftl/utils/ftl_conf.o 00:06:32.139 CC lib/ftl/utils/ftl_md.o 00:06:32.139 CC lib/ftl/utils/ftl_mempool.o 00:06:32.139 CC lib/ftl/utils/ftl_bitmap.o 00:06:32.139 CC lib/ftl/utils/ftl_property.o 00:06:32.139 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:32.139 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:32.398 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:32.398 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:32.398 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:32.398 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:32.398 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:32.398 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:32.398 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:32.398 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:32.398 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:32.398 CC lib/ftl/base/ftl_base_dev.o 00:06:32.398 CC lib/ftl/base/ftl_base_bdev.o 00:06:32.398 CC lib/ftl/ftl_trace.o 00:06:32.655 LIB libspdk_nbd.a 00:06:32.655 SO libspdk_nbd.so.7.0 00:06:32.655 SYMLINK libspdk_nbd.so 00:06:32.655 LIB libspdk_scsi.a 00:06:32.655 SO libspdk_scsi.so.9.0 00:06:32.912 LIB libspdk_ublk.a 00:06:32.912 SO libspdk_ublk.so.3.0 00:06:32.912 SYMLINK 
libspdk_scsi.so 00:06:32.912 SYMLINK libspdk_ublk.so 00:06:32.912 CC lib/vhost/vhost.o 00:06:32.912 CC lib/iscsi/conn.o 00:06:32.912 CC lib/vhost/vhost_rpc.o 00:06:32.912 CC lib/iscsi/init_grp.o 00:06:32.912 CC lib/vhost/vhost_scsi.o 00:06:32.912 CC lib/iscsi/iscsi.o 00:06:32.912 CC lib/vhost/vhost_blk.o 00:06:32.912 CC lib/iscsi/md5.o 00:06:32.912 CC lib/vhost/rte_vhost_user.o 00:06:32.912 CC lib/iscsi/param.o 00:06:32.912 CC lib/iscsi/tgt_node.o 00:06:32.912 CC lib/iscsi/portal_grp.o 00:06:32.912 CC lib/iscsi/iscsi_subsystem.o 00:06:32.912 CC lib/iscsi/iscsi_rpc.o 00:06:32.912 CC lib/iscsi/task.o 00:06:33.169 LIB libspdk_ftl.a 00:06:33.427 SO libspdk_ftl.so.9.0 00:06:33.698 SYMLINK libspdk_ftl.so 00:06:34.261 LIB libspdk_vhost.a 00:06:34.261 SO libspdk_vhost.so.8.0 00:06:34.261 LIB libspdk_nvmf.a 00:06:34.261 SYMLINK libspdk_vhost.so 00:06:34.517 SO libspdk_nvmf.so.19.0 00:06:34.517 LIB libspdk_iscsi.a 00:06:34.517 SO libspdk_iscsi.so.8.0 00:06:34.517 SYMLINK libspdk_nvmf.so 00:06:34.774 SYMLINK libspdk_iscsi.so 00:06:35.045 CC module/vfu_device/vfu_virtio.o 00:06:35.045 CC module/env_dpdk/env_dpdk_rpc.o 00:06:35.045 CC module/vfu_device/vfu_virtio_blk.o 00:06:35.045 CC module/vfu_device/vfu_virtio_scsi.o 00:06:35.045 CC module/vfu_device/vfu_virtio_rpc.o 00:06:35.045 CC module/accel/ioat/accel_ioat.o 00:06:35.045 CC module/accel/ioat/accel_ioat_rpc.o 00:06:35.045 CC module/accel/dsa/accel_dsa.o 00:06:35.045 CC module/accel/error/accel_error.o 00:06:35.045 CC module/accel/error/accel_error_rpc.o 00:06:35.045 CC module/sock/posix/posix.o 00:06:35.045 CC module/keyring/file/keyring.o 00:06:35.045 CC module/accel/dsa/accel_dsa_rpc.o 00:06:35.045 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:35.045 CC module/keyring/file/keyring_rpc.o 00:06:35.045 CC module/scheduler/gscheduler/gscheduler.o 00:06:35.045 CC module/accel/iaa/accel_iaa.o 00:06:35.045 CC module/blob/bdev/blob_bdev.o 00:06:35.045 CC module/accel/iaa/accel_iaa_rpc.o 00:06:35.045 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:06:35.045 CC module/keyring/linux/keyring.o 00:06:35.045 CC module/keyring/linux/keyring_rpc.o 00:06:35.045 LIB libspdk_env_dpdk_rpc.a 00:06:35.045 SO libspdk_env_dpdk_rpc.so.6.0 00:06:35.303 SYMLINK libspdk_env_dpdk_rpc.so 00:06:35.303 LIB libspdk_keyring_file.a 00:06:35.303 LIB libspdk_keyring_linux.a 00:06:35.303 LIB libspdk_scheduler_gscheduler.a 00:06:35.303 LIB libspdk_scheduler_dpdk_governor.a 00:06:35.303 SO libspdk_keyring_file.so.1.0 00:06:35.303 SO libspdk_scheduler_gscheduler.so.4.0 00:06:35.303 SO libspdk_keyring_linux.so.1.0 00:06:35.303 LIB libspdk_accel_ioat.a 00:06:35.303 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:35.303 LIB libspdk_accel_error.a 00:06:35.303 LIB libspdk_scheduler_dynamic.a 00:06:35.303 LIB libspdk_accel_iaa.a 00:06:35.303 SO libspdk_accel_ioat.so.6.0 00:06:35.303 SO libspdk_accel_error.so.2.0 00:06:35.303 SYMLINK libspdk_scheduler_gscheduler.so 00:06:35.303 SYMLINK libspdk_keyring_file.so 00:06:35.303 SYMLINK libspdk_keyring_linux.so 00:06:35.303 SO libspdk_scheduler_dynamic.so.4.0 00:06:35.303 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:35.303 SO libspdk_accel_iaa.so.3.0 00:06:35.303 LIB libspdk_accel_dsa.a 00:06:35.303 SYMLINK libspdk_accel_ioat.so 00:06:35.303 SYMLINK libspdk_accel_error.so 00:06:35.303 LIB libspdk_blob_bdev.a 00:06:35.303 SYMLINK libspdk_scheduler_dynamic.so 00:06:35.303 SYMLINK libspdk_accel_iaa.so 00:06:35.303 SO libspdk_accel_dsa.so.5.0 00:06:35.303 SO libspdk_blob_bdev.so.11.0 00:06:35.303 SYMLINK libspdk_blob_bdev.so 00:06:35.560 SYMLINK libspdk_accel_dsa.so 00:06:35.560 LIB libspdk_vfu_device.a 00:06:35.560 SO libspdk_vfu_device.so.3.0 00:06:35.560 CC module/bdev/gpt/gpt.o 00:06:35.560 CC module/bdev/delay/vbdev_delay.o 00:06:35.560 CC module/bdev/error/vbdev_error.o 00:06:35.560 CC module/bdev/lvol/vbdev_lvol.o 00:06:35.560 CC module/bdev/null/bdev_null.o 00:06:35.560 CC module/bdev/gpt/vbdev_gpt.o 00:06:35.560 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:06:35.560 CC module/bdev/error/vbdev_error_rpc.o 00:06:35.560 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:35.560 CC module/bdev/null/bdev_null_rpc.o 00:06:35.560 CC module/bdev/split/vbdev_split.o 00:06:35.560 CC module/bdev/malloc/bdev_malloc.o 00:06:35.560 CC module/bdev/passthru/vbdev_passthru.o 00:06:35.560 CC module/blobfs/bdev/blobfs_bdev.o 00:06:35.560 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:35.560 CC module/bdev/split/vbdev_split_rpc.o 00:06:35.560 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:35.560 CC module/bdev/raid/bdev_raid.o 00:06:35.560 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:35.560 CC module/bdev/aio/bdev_aio.o 00:06:35.560 CC module/bdev/raid/bdev_raid_rpc.o 00:06:35.560 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:35.560 CC module/bdev/aio/bdev_aio_rpc.o 00:06:35.560 CC module/bdev/iscsi/bdev_iscsi.o 00:06:35.560 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:35.560 CC module/bdev/raid/bdev_raid_sb.o 00:06:35.560 CC module/bdev/raid/raid0.o 00:06:35.560 CC module/bdev/raid/raid1.o 00:06:35.560 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:35.560 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:35.821 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:35.821 CC module/bdev/nvme/bdev_nvme.o 00:06:35.821 CC module/bdev/raid/concat.o 00:06:35.821 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:35.821 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:35.821 CC module/bdev/nvme/nvme_rpc.o 00:06:35.821 CC module/bdev/ftl/bdev_ftl.o 00:06:35.821 CC module/bdev/nvme/bdev_mdns_client.o 00:06:35.821 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:35.821 CC module/bdev/nvme/vbdev_opal.o 00:06:35.821 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:35.821 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:35.821 SYMLINK libspdk_vfu_device.so 00:06:36.079 LIB libspdk_sock_posix.a 00:06:36.079 SO libspdk_sock_posix.so.6.0 00:06:36.079 LIB libspdk_bdev_split.a 00:06:36.079 LIB libspdk_blobfs_bdev.a 00:06:36.079 SO 
libspdk_bdev_split.so.6.0 00:06:36.079 SO libspdk_blobfs_bdev.so.6.0 00:06:36.079 LIB libspdk_bdev_passthru.a 00:06:36.079 SO libspdk_bdev_passthru.so.6.0 00:06:36.079 SYMLINK libspdk_bdev_split.so 00:06:36.079 SYMLINK libspdk_sock_posix.so 00:06:36.079 SYMLINK libspdk_blobfs_bdev.so 00:06:36.079 LIB libspdk_bdev_error.a 00:06:36.079 LIB libspdk_bdev_null.a 00:06:36.079 LIB libspdk_bdev_ftl.a 00:06:36.079 SO libspdk_bdev_error.so.6.0 00:06:36.079 SYMLINK libspdk_bdev_passthru.so 00:06:36.079 SO libspdk_bdev_null.so.6.0 00:06:36.079 LIB libspdk_bdev_gpt.a 00:06:36.336 SO libspdk_bdev_ftl.so.6.0 00:06:36.336 SO libspdk_bdev_gpt.so.6.0 00:06:36.336 SYMLINK libspdk_bdev_error.so 00:06:36.336 LIB libspdk_bdev_malloc.a 00:06:36.336 SYMLINK libspdk_bdev_null.so 00:06:36.336 LIB libspdk_bdev_zone_block.a 00:06:36.336 SYMLINK libspdk_bdev_ftl.so 00:06:36.336 SO libspdk_bdev_malloc.so.6.0 00:06:36.336 SYMLINK libspdk_bdev_gpt.so 00:06:36.336 LIB libspdk_bdev_iscsi.a 00:06:36.336 LIB libspdk_bdev_aio.a 00:06:36.336 SO libspdk_bdev_zone_block.so.6.0 00:06:36.336 SO libspdk_bdev_iscsi.so.6.0 00:06:36.336 SO libspdk_bdev_aio.so.6.0 00:06:36.336 LIB libspdk_bdev_delay.a 00:06:36.336 SYMLINK libspdk_bdev_malloc.so 00:06:36.336 LIB libspdk_bdev_lvol.a 00:06:36.336 SYMLINK libspdk_bdev_zone_block.so 00:06:36.336 SO libspdk_bdev_delay.so.6.0 00:06:36.336 SO libspdk_bdev_lvol.so.6.0 00:06:36.336 SYMLINK libspdk_bdev_iscsi.so 00:06:36.336 SYMLINK libspdk_bdev_aio.so 00:06:36.336 LIB libspdk_bdev_virtio.a 00:06:36.336 SYMLINK libspdk_bdev_delay.so 00:06:36.336 SYMLINK libspdk_bdev_lvol.so 00:06:36.336 SO libspdk_bdev_virtio.so.6.0 00:06:36.595 SYMLINK libspdk_bdev_virtio.so 00:06:36.853 LIB libspdk_bdev_raid.a 00:06:36.853 SO libspdk_bdev_raid.so.6.0 00:06:37.111 SYMLINK libspdk_bdev_raid.so 00:06:38.044 LIB libspdk_bdev_nvme.a 00:06:38.044 SO libspdk_bdev_nvme.so.7.0 00:06:38.044 SYMLINK libspdk_bdev_nvme.so 00:06:38.649 CC module/event/subsystems/iobuf/iobuf.o 00:06:38.649 CC 
module/event/subsystems/scheduler/scheduler.o 00:06:38.649 CC module/event/subsystems/keyring/keyring.o 00:06:38.649 CC module/event/subsystems/vmd/vmd.o 00:06:38.649 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:38.649 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:38.649 CC module/event/subsystems/sock/sock.o 00:06:38.649 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:38.649 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:38.649 LIB libspdk_event_keyring.a 00:06:38.649 LIB libspdk_event_vhost_blk.a 00:06:38.649 LIB libspdk_event_vmd.a 00:06:38.649 LIB libspdk_event_vfu_tgt.a 00:06:38.649 LIB libspdk_event_scheduler.a 00:06:38.649 LIB libspdk_event_sock.a 00:06:38.649 LIB libspdk_event_iobuf.a 00:06:38.649 SO libspdk_event_vhost_blk.so.3.0 00:06:38.649 SO libspdk_event_keyring.so.1.0 00:06:38.649 SO libspdk_event_vmd.so.6.0 00:06:38.649 SO libspdk_event_vfu_tgt.so.3.0 00:06:38.649 SO libspdk_event_scheduler.so.4.0 00:06:38.649 SO libspdk_event_sock.so.5.0 00:06:38.649 SO libspdk_event_iobuf.so.3.0 00:06:38.649 SYMLINK libspdk_event_keyring.so 00:06:38.649 SYMLINK libspdk_event_vhost_blk.so 00:06:38.649 SYMLINK libspdk_event_vfu_tgt.so 00:06:38.649 SYMLINK libspdk_event_vmd.so 00:06:38.649 SYMLINK libspdk_event_scheduler.so 00:06:38.649 SYMLINK libspdk_event_sock.so 00:06:38.649 SYMLINK libspdk_event_iobuf.so 00:06:38.907 CC module/event/subsystems/accel/accel.o 00:06:39.165 LIB libspdk_event_accel.a 00:06:39.165 SO libspdk_event_accel.so.6.0 00:06:39.165 SYMLINK libspdk_event_accel.so 00:06:39.423 CC module/event/subsystems/bdev/bdev.o 00:06:39.423 LIB libspdk_event_bdev.a 00:06:39.679 SO libspdk_event_bdev.so.6.0 00:06:39.679 SYMLINK libspdk_event_bdev.so 00:06:39.679 CC module/event/subsystems/scsi/scsi.o 00:06:39.679 CC module/event/subsystems/ublk/ublk.o 00:06:39.679 CC module/event/subsystems/nbd/nbd.o 00:06:39.679 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:39.679 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:39.935 LIB 
libspdk_event_ublk.a 00:06:39.935 LIB libspdk_event_nbd.a 00:06:39.935 LIB libspdk_event_scsi.a 00:06:39.935 SO libspdk_event_nbd.so.6.0 00:06:39.935 SO libspdk_event_ublk.so.3.0 00:06:39.935 SO libspdk_event_scsi.so.6.0 00:06:39.935 SYMLINK libspdk_event_nbd.so 00:06:39.935 SYMLINK libspdk_event_ublk.so 00:06:39.935 SYMLINK libspdk_event_scsi.so 00:06:39.935 LIB libspdk_event_nvmf.a 00:06:39.935 SO libspdk_event_nvmf.so.6.0 00:06:40.191 SYMLINK libspdk_event_nvmf.so 00:06:40.191 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:40.191 CC module/event/subsystems/iscsi/iscsi.o 00:06:40.191 LIB libspdk_event_vhost_scsi.a 00:06:40.448 LIB libspdk_event_iscsi.a 00:06:40.448 SO libspdk_event_vhost_scsi.so.3.0 00:06:40.448 SO libspdk_event_iscsi.so.6.0 00:06:40.448 SYMLINK libspdk_event_vhost_scsi.so 00:06:40.448 SYMLINK libspdk_event_iscsi.so 00:06:40.448 SO libspdk.so.6.0 00:06:40.448 SYMLINK libspdk.so 00:06:40.711 CC app/trace_record/trace_record.o 00:06:40.711 CC app/spdk_nvme_perf/perf.o 00:06:40.711 CXX app/trace/trace.o 00:06:40.711 CC test/rpc_client/rpc_client_test.o 00:06:40.711 CC app/spdk_top/spdk_top.o 00:06:40.711 TEST_HEADER include/spdk/accel.h 00:06:40.711 TEST_HEADER include/spdk/accel_module.h 00:06:40.711 CC app/spdk_lspci/spdk_lspci.o 00:06:40.711 TEST_HEADER include/spdk/assert.h 00:06:40.711 TEST_HEADER include/spdk/barrier.h 00:06:40.711 TEST_HEADER include/spdk/base64.h 00:06:40.711 CC app/spdk_nvme_discover/discovery_aer.o 00:06:40.711 CC app/spdk_nvme_identify/identify.o 00:06:40.711 TEST_HEADER include/spdk/bdev.h 00:06:40.711 TEST_HEADER include/spdk/bdev_module.h 00:06:40.711 TEST_HEADER include/spdk/bdev_zone.h 00:06:40.711 TEST_HEADER include/spdk/bit_array.h 00:06:40.711 TEST_HEADER include/spdk/bit_pool.h 00:06:40.711 TEST_HEADER include/spdk/blob_bdev.h 00:06:40.711 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:40.711 TEST_HEADER include/spdk/blobfs.h 00:06:40.711 TEST_HEADER include/spdk/blob.h 00:06:40.711 TEST_HEADER 
include/spdk/conf.h 00:06:40.711 TEST_HEADER include/spdk/cpuset.h 00:06:40.711 TEST_HEADER include/spdk/config.h 00:06:40.711 TEST_HEADER include/spdk/crc16.h 00:06:40.711 TEST_HEADER include/spdk/crc32.h 00:06:40.711 TEST_HEADER include/spdk/crc64.h 00:06:40.711 TEST_HEADER include/spdk/dif.h 00:06:40.711 TEST_HEADER include/spdk/dma.h 00:06:40.711 TEST_HEADER include/spdk/endian.h 00:06:40.711 TEST_HEADER include/spdk/env_dpdk.h 00:06:40.711 TEST_HEADER include/spdk/env.h 00:06:40.711 TEST_HEADER include/spdk/event.h 00:06:40.711 TEST_HEADER include/spdk/fd_group.h 00:06:40.711 TEST_HEADER include/spdk/fd.h 00:06:40.711 TEST_HEADER include/spdk/file.h 00:06:40.711 TEST_HEADER include/spdk/ftl.h 00:06:40.711 TEST_HEADER include/spdk/gpt_spec.h 00:06:40.711 TEST_HEADER include/spdk/hexlify.h 00:06:40.711 TEST_HEADER include/spdk/histogram_data.h 00:06:40.712 TEST_HEADER include/spdk/idxd.h 00:06:40.712 TEST_HEADER include/spdk/idxd_spec.h 00:06:40.712 TEST_HEADER include/spdk/init.h 00:06:40.712 TEST_HEADER include/spdk/ioat.h 00:06:40.712 TEST_HEADER include/spdk/ioat_spec.h 00:06:40.712 TEST_HEADER include/spdk/iscsi_spec.h 00:06:40.712 TEST_HEADER include/spdk/json.h 00:06:40.712 TEST_HEADER include/spdk/jsonrpc.h 00:06:40.712 TEST_HEADER include/spdk/keyring.h 00:06:40.712 TEST_HEADER include/spdk/keyring_module.h 00:06:40.712 TEST_HEADER include/spdk/likely.h 00:06:40.712 TEST_HEADER include/spdk/log.h 00:06:40.712 TEST_HEADER include/spdk/lvol.h 00:06:40.712 TEST_HEADER include/spdk/memory.h 00:06:40.712 TEST_HEADER include/spdk/mmio.h 00:06:40.712 TEST_HEADER include/spdk/nbd.h 00:06:40.712 TEST_HEADER include/spdk/net.h 00:06:40.712 TEST_HEADER include/spdk/notify.h 00:06:40.712 TEST_HEADER include/spdk/nvme.h 00:06:40.712 TEST_HEADER include/spdk/nvme_intel.h 00:06:40.712 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:40.712 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:40.712 TEST_HEADER include/spdk/nvme_spec.h 00:06:40.712 TEST_HEADER 
include/spdk/nvme_zns.h 00:06:40.712 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:40.712 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:40.712 TEST_HEADER include/spdk/nvmf.h 00:06:40.712 TEST_HEADER include/spdk/nvmf_spec.h 00:06:40.712 TEST_HEADER include/spdk/nvmf_transport.h 00:06:40.712 TEST_HEADER include/spdk/opal_spec.h 00:06:40.712 TEST_HEADER include/spdk/opal.h 00:06:40.712 TEST_HEADER include/spdk/pci_ids.h 00:06:40.712 TEST_HEADER include/spdk/pipe.h 00:06:40.712 TEST_HEADER include/spdk/queue.h 00:06:40.712 TEST_HEADER include/spdk/reduce.h 00:06:40.712 TEST_HEADER include/spdk/rpc.h 00:06:40.712 TEST_HEADER include/spdk/scheduler.h 00:06:40.712 TEST_HEADER include/spdk/scsi.h 00:06:40.712 TEST_HEADER include/spdk/scsi_spec.h 00:06:40.712 TEST_HEADER include/spdk/sock.h 00:06:40.712 TEST_HEADER include/spdk/stdinc.h 00:06:40.712 TEST_HEADER include/spdk/string.h 00:06:40.712 TEST_HEADER include/spdk/thread.h 00:06:40.712 TEST_HEADER include/spdk/trace.h 00:06:40.712 TEST_HEADER include/spdk/trace_parser.h 00:06:40.712 TEST_HEADER include/spdk/tree.h 00:06:40.712 TEST_HEADER include/spdk/ublk.h 00:06:40.712 TEST_HEADER include/spdk/util.h 00:06:40.712 TEST_HEADER include/spdk/uuid.h 00:06:40.712 TEST_HEADER include/spdk/version.h 00:06:40.712 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:40.712 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:40.712 TEST_HEADER include/spdk/vhost.h 00:06:40.712 TEST_HEADER include/spdk/vmd.h 00:06:40.712 TEST_HEADER include/spdk/xor.h 00:06:40.712 TEST_HEADER include/spdk/zipf.h 00:06:40.712 CXX test/cpp_headers/accel.o 00:06:40.712 CXX test/cpp_headers/accel_module.o 00:06:40.712 CXX test/cpp_headers/assert.o 00:06:40.712 CXX test/cpp_headers/barrier.o 00:06:40.712 CXX test/cpp_headers/base64.o 00:06:40.712 CXX test/cpp_headers/bdev.o 00:06:40.712 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:40.712 CXX test/cpp_headers/bdev_module.o 00:06:40.712 CXX test/cpp_headers/bdev_zone.o 00:06:40.712 CXX 
test/cpp_headers/bit_array.o 00:06:40.712 CXX test/cpp_headers/bit_pool.o 00:06:40.712 CXX test/cpp_headers/blob_bdev.o 00:06:40.712 CXX test/cpp_headers/blobfs_bdev.o 00:06:40.712 CXX test/cpp_headers/blobfs.o 00:06:40.712 CXX test/cpp_headers/blob.o 00:06:40.712 CXX test/cpp_headers/conf.o 00:06:40.712 CC app/spdk_dd/spdk_dd.o 00:06:40.712 CXX test/cpp_headers/config.o 00:06:40.712 CXX test/cpp_headers/cpuset.o 00:06:40.712 CC app/iscsi_tgt/iscsi_tgt.o 00:06:40.712 CC app/nvmf_tgt/nvmf_main.o 00:06:40.712 CXX test/cpp_headers/crc16.o 00:06:40.712 CXX test/cpp_headers/crc32.o 00:06:40.712 CC app/spdk_tgt/spdk_tgt.o 00:06:40.712 CC examples/ioat/verify/verify.o 00:06:40.972 CC test/app/jsoncat/jsoncat.o 00:06:40.972 CC test/app/histogram_perf/histogram_perf.o 00:06:40.972 CC examples/util/zipf/zipf.o 00:06:40.972 CC test/app/stub/stub.o 00:06:40.972 CC test/env/vtophys/vtophys.o 00:06:40.972 CC test/env/memory/memory_ut.o 00:06:40.972 CC test/thread/poller_perf/poller_perf.o 00:06:40.972 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:40.972 CC examples/ioat/perf/perf.o 00:06:40.972 CC app/fio/nvme/fio_plugin.o 00:06:40.972 CC test/env/pci/pci_ut.o 00:06:40.972 CC test/dma/test_dma/test_dma.o 00:06:40.972 CC test/app/bdev_svc/bdev_svc.o 00:06:40.972 CC app/fio/bdev/fio_plugin.o 00:06:40.972 CC test/env/mem_callbacks/mem_callbacks.o 00:06:40.972 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:40.972 LINK spdk_lspci 00:06:41.231 LINK rpc_client_test 00:06:41.231 LINK spdk_nvme_discover 00:06:41.231 LINK jsoncat 00:06:41.231 LINK histogram_perf 00:06:41.231 LINK vtophys 00:06:41.231 CXX test/cpp_headers/crc64.o 00:06:41.231 LINK interrupt_tgt 00:06:41.231 LINK poller_perf 00:06:41.231 CXX test/cpp_headers/dif.o 00:06:41.231 LINK zipf 00:06:41.231 LINK env_dpdk_post_init 00:06:41.231 CXX test/cpp_headers/dma.o 00:06:41.231 CXX test/cpp_headers/endian.o 00:06:41.231 CXX test/cpp_headers/env_dpdk.o 00:06:41.231 CXX test/cpp_headers/env.o 00:06:41.231 LINK stub 
00:06:41.231 LINK nvmf_tgt 00:06:41.231 CXX test/cpp_headers/event.o 00:06:41.231 CXX test/cpp_headers/fd_group.o 00:06:41.231 LINK spdk_trace_record 00:06:41.231 CXX test/cpp_headers/fd.o 00:06:41.231 CXX test/cpp_headers/file.o 00:06:41.231 CXX test/cpp_headers/ftl.o 00:06:41.231 LINK iscsi_tgt 00:06:41.231 CXX test/cpp_headers/gpt_spec.o 00:06:41.231 CXX test/cpp_headers/hexlify.o 00:06:41.231 LINK verify 00:06:41.231 CXX test/cpp_headers/histogram_data.o 00:06:41.231 CXX test/cpp_headers/idxd.o 00:06:41.231 LINK ioat_perf 00:06:41.231 LINK bdev_svc 00:06:41.231 LINK spdk_tgt 00:06:41.231 CXX test/cpp_headers/idxd_spec.o 00:06:41.493 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:41.493 CXX test/cpp_headers/init.o 00:06:41.493 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:41.493 CXX test/cpp_headers/ioat.o 00:06:41.493 LINK mem_callbacks 00:06:41.493 CXX test/cpp_headers/ioat_spec.o 00:06:41.493 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:41.493 CXX test/cpp_headers/iscsi_spec.o 00:06:41.493 CXX test/cpp_headers/json.o 00:06:41.493 CXX test/cpp_headers/jsonrpc.o 00:06:41.493 CXX test/cpp_headers/keyring.o 00:06:41.493 CXX test/cpp_headers/keyring_module.o 00:06:41.493 CXX test/cpp_headers/likely.o 00:06:41.493 LINK spdk_trace 00:06:41.493 LINK spdk_dd 00:06:41.493 CXX test/cpp_headers/log.o 00:06:41.493 CXX test/cpp_headers/lvol.o 00:06:41.756 CXX test/cpp_headers/memory.o 00:06:41.756 CXX test/cpp_headers/mmio.o 00:06:41.756 CXX test/cpp_headers/nbd.o 00:06:41.756 CXX test/cpp_headers/net.o 00:06:41.756 CXX test/cpp_headers/notify.o 00:06:41.756 LINK pci_ut 00:06:41.756 CXX test/cpp_headers/nvme.o 00:06:41.756 CXX test/cpp_headers/nvme_intel.o 00:06:41.756 CXX test/cpp_headers/nvme_ocssd.o 00:06:41.756 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:41.756 CXX test/cpp_headers/nvme_spec.o 00:06:41.756 CXX test/cpp_headers/nvme_zns.o 00:06:41.756 LINK test_dma 00:06:41.756 CXX test/cpp_headers/nvmf_cmd.o 00:06:41.756 CXX test/cpp_headers/nvmf_fc_spec.o 
00:06:41.756 CXX test/cpp_headers/nvmf.o 00:06:41.756 CXX test/cpp_headers/nvmf_spec.o 00:06:41.756 CXX test/cpp_headers/nvmf_transport.o 00:06:41.756 CXX test/cpp_headers/opal.o 00:06:41.756 CXX test/cpp_headers/opal_spec.o 00:06:41.756 CXX test/cpp_headers/pci_ids.o 00:06:41.756 CC test/event/event_perf/event_perf.o 00:06:41.756 CC test/event/reactor/reactor.o 00:06:41.756 CC test/event/reactor_perf/reactor_perf.o 00:06:41.756 LINK nvme_fuzz 00:06:41.756 CXX test/cpp_headers/pipe.o 00:06:42.018 CC test/event/app_repeat/app_repeat.o 00:06:42.018 CXX test/cpp_headers/queue.o 00:06:42.018 CXX test/cpp_headers/reduce.o 00:06:42.018 CXX test/cpp_headers/rpc.o 00:06:42.018 CC examples/sock/hello_world/hello_sock.o 00:06:42.018 CXX test/cpp_headers/scheduler.o 00:06:42.018 CC examples/vmd/lsvmd/lsvmd.o 00:06:42.018 CC examples/idxd/perf/perf.o 00:06:42.018 CXX test/cpp_headers/scsi.o 00:06:42.018 CXX test/cpp_headers/scsi_spec.o 00:06:42.018 CXX test/cpp_headers/sock.o 00:06:42.018 LINK spdk_bdev 00:06:42.018 LINK spdk_nvme 00:06:42.018 CXX test/cpp_headers/stdinc.o 00:06:42.018 CC test/event/scheduler/scheduler.o 00:06:42.018 CC examples/thread/thread/thread_ex.o 00:06:42.018 CXX test/cpp_headers/string.o 00:06:42.018 CXX test/cpp_headers/thread.o 00:06:42.018 CXX test/cpp_headers/trace.o 00:06:42.018 CC examples/vmd/led/led.o 00:06:42.018 CXX test/cpp_headers/trace_parser.o 00:06:42.018 CXX test/cpp_headers/tree.o 00:06:42.018 CXX test/cpp_headers/ublk.o 00:06:42.018 CXX test/cpp_headers/util.o 00:06:42.018 CXX test/cpp_headers/uuid.o 00:06:42.287 LINK reactor 00:06:42.287 CXX test/cpp_headers/version.o 00:06:42.287 LINK event_perf 00:06:42.287 CXX test/cpp_headers/vfio_user_pci.o 00:06:42.287 CXX test/cpp_headers/vfio_user_spec.o 00:06:42.287 CXX test/cpp_headers/vhost.o 00:06:42.287 LINK reactor_perf 00:06:42.287 CXX test/cpp_headers/vmd.o 00:06:42.287 CXX test/cpp_headers/xor.o 00:06:42.287 CXX test/cpp_headers/zipf.o 00:06:42.287 CC app/vhost/vhost.o 00:06:42.287 
LINK spdk_nvme_perf 00:06:42.287 LINK lsvmd 00:06:42.287 LINK app_repeat 00:06:42.287 LINK memory_ut 00:06:42.287 LINK vhost_fuzz 00:06:42.287 LINK spdk_nvme_identify 00:06:42.287 LINK led 00:06:42.287 LINK spdk_top 00:06:42.544 LINK hello_sock 00:06:42.544 CC test/nvme/aer/aer.o 00:06:42.544 CC test/nvme/err_injection/err_injection.o 00:06:42.544 CC test/nvme/reserve/reserve.o 00:06:42.544 CC test/nvme/startup/startup.o 00:06:42.544 CC test/nvme/e2edp/nvme_dp.o 00:06:42.544 CC test/nvme/sgl/sgl.o 00:06:42.544 CC test/nvme/boot_partition/boot_partition.o 00:06:42.544 CC test/nvme/overhead/overhead.o 00:06:42.544 CC test/nvme/reset/reset.o 00:06:42.544 CC test/nvme/connect_stress/connect_stress.o 00:06:42.544 CC test/nvme/simple_copy/simple_copy.o 00:06:42.544 CC test/accel/dif/dif.o 00:06:42.544 LINK scheduler 00:06:42.544 CC test/blobfs/mkfs/mkfs.o 00:06:42.544 CC test/nvme/compliance/nvme_compliance.o 00:06:42.544 CC test/nvme/fused_ordering/fused_ordering.o 00:06:42.544 LINK thread 00:06:42.544 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:42.544 CC test/nvme/fdp/fdp.o 00:06:42.544 CC test/nvme/cuse/cuse.o 00:06:42.544 CC test/lvol/esnap/esnap.o 00:06:42.544 LINK vhost 00:06:42.544 LINK idxd_perf 00:06:42.811 LINK boot_partition 00:06:42.811 LINK startup 00:06:42.811 LINK err_injection 00:06:42.811 LINK connect_stress 00:06:42.811 LINK mkfs 00:06:42.811 LINK fused_ordering 00:06:42.811 LINK simple_copy 00:06:42.811 LINK nvme_dp 00:06:42.811 LINK sgl 00:06:42.811 LINK reserve 00:06:42.811 LINK overhead 00:06:42.811 CC examples/nvme/reconnect/reconnect.o 00:06:42.811 CC examples/nvme/abort/abort.o 00:06:42.811 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:42.811 LINK aer 00:06:42.811 CC examples/nvme/hello_world/hello_world.o 00:06:42.811 LINK reset 00:06:42.811 CC examples/nvme/hotplug/hotplug.o 00:06:42.811 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:42.811 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:42.811 CC 
examples/nvme/arbitration/arbitration.o 00:06:42.811 LINK doorbell_aers 00:06:42.811 LINK nvme_compliance 00:06:43.067 CC examples/accel/perf/accel_perf.o 00:06:43.067 LINK dif 00:06:43.067 CC examples/blob/cli/blobcli.o 00:06:43.067 CC examples/blob/hello_world/hello_blob.o 00:06:43.067 LINK fdp 00:06:43.067 LINK cmb_copy 00:06:43.067 LINK hello_world 00:06:43.067 LINK hotplug 00:06:43.067 LINK pmr_persistence 00:06:43.324 LINK arbitration 00:06:43.324 LINK reconnect 00:06:43.324 LINK abort 00:06:43.324 LINK hello_blob 00:06:43.324 CC test/bdev/bdevio/bdevio.o 00:06:43.581 LINK nvme_manage 00:06:43.581 LINK accel_perf 00:06:43.581 LINK blobcli 00:06:43.837 LINK iscsi_fuzz 00:06:43.837 LINK bdevio 00:06:43.837 CC examples/bdev/hello_world/hello_bdev.o 00:06:43.837 CC examples/bdev/bdevperf/bdevperf.o 00:06:44.094 LINK hello_bdev 00:06:44.094 LINK cuse 00:06:44.657 LINK bdevperf 00:06:44.913 CC examples/nvmf/nvmf/nvmf.o 00:06:45.171 LINK nvmf 00:06:47.701 LINK esnap 00:06:47.959 00:06:47.959 real 0m41.082s 00:06:47.959 user 7m22.619s 00:06:47.959 sys 1m48.146s 00:06:47.959 08:02:57 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:06:47.959 08:02:57 make -- common/autotest_common.sh@10 -- $ set +x 00:06:47.959 ************************************ 00:06:47.959 END TEST make 00:06:47.959 ************************************ 00:06:47.959 08:02:57 -- common/autotest_common.sh@1142 -- $ return 0 00:06:47.959 08:02:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:47.959 08:02:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:47.959 08:02:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:47.959 08:02:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.959 08:02:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:47.959 08:02:57 -- pm/common@44 -- $ pid=3880955 00:06:47.959 08:02:57 -- pm/common@50 -- $ kill -TERM 3880955 
00:06:47.959 08:02:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.959 08:02:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:47.959 08:02:57 -- pm/common@44 -- $ pid=3880957 00:06:47.959 08:02:57 -- pm/common@50 -- $ kill -TERM 3880957 00:06:47.959 08:02:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.959 08:02:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:47.959 08:02:57 -- pm/common@44 -- $ pid=3880959 00:06:47.959 08:02:57 -- pm/common@50 -- $ kill -TERM 3880959 00:06:47.959 08:02:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.959 08:02:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:47.959 08:02:57 -- pm/common@44 -- $ pid=3880987 00:06:47.959 08:02:57 -- pm/common@50 -- $ sudo -E kill -TERM 3880987 00:06:47.959 08:02:57 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.959 08:02:57 -- nvmf/common.sh@7 -- # uname -s 00:06:47.959 08:02:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.959 08:02:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.959 08:02:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.959 08:02:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.959 08:02:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.959 08:02:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.959 08:02:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.959 08:02:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.959 08:02:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.959 08:02:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.959 08:02:57 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:47.959 08:02:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:47.959 08:02:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:47.959 08:02:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:47.959 08:02:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:47.959 08:02:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:47.959 08:02:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:47.959 08:02:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:47.959 08:02:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:47.959 08:02:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:47.959 08:02:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.959 08:02:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.959 08:02:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.959 08:02:57 -- paths/export.sh@5 -- # export PATH
00:06:47.959 08:02:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.959 08:02:57 -- nvmf/common.sh@47 -- # : 0
00:06:47.959 08:02:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:06:47.959 08:02:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:06:47.959 08:02:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:47.959 08:02:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:47.959 08:02:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:47.959 08:02:57 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:06:47.959 08:02:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:06:47.959 08:02:57 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:06:47.959 08:02:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:06:47.959 08:02:57 -- spdk/autotest.sh@32 -- # uname -s
00:06:47.959 08:02:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:06:47.959 08:02:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:06:47.959 08:02:57 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:06:47.959 08:02:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:06:47.959 08:02:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:06:47.959 08:02:57 -- spdk/autotest.sh@44 -- # modprobe nbd
00:06:47.959 08:02:57 -- spdk/autotest.sh@46 -- # type -P udevadm
00:06:47.959 08:02:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:06:47.959 08:02:57 -- spdk/autotest.sh@48 -- # udevadm_pid=3956634
00:06:47.959 08:02:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:06:47.959 08:02:57 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:06:47.959 08:02:57 -- pm/common@17 -- # local monitor
00:06:47.959 08:02:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:06:47.959 08:02:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:06:47.959 08:02:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:06:47.959 08:02:57 -- pm/common@21 -- # date +%s
00:06:47.959 08:02:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:06:47.959 08:02:57 -- pm/common@21 -- # date +%s
00:06:47.959 08:02:57 -- pm/common@25 -- # sleep 1
00:06:47.959 08:02:57 -- pm/common@21 -- # date +%s
00:06:47.959 08:02:57 -- pm/common@21 -- # date +%s
00:06:47.959 08:02:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721541777
00:06:47.959 08:02:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721541777
00:06:47.959 08:02:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721541777
00:06:47.959 08:02:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721541777
00:06:47.959 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721541777_collect-vmstat.pm.log
00:06:47.959 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721541777_collect-cpu-load.pm.log
00:06:47.959 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721541777_collect-cpu-temp.pm.log
00:06:47.959 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721541777_collect-bmc-pm.bmc.pm.log
00:06:48.889 08:02:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:06:48.889 08:02:58 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:06:48.889 08:02:58 -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:48.889 08:02:58 -- common/autotest_common.sh@10 -- # set +x
00:06:48.889 08:02:58 -- spdk/autotest.sh@59 -- # create_test_list
00:06:48.889 08:02:58 -- common/autotest_common.sh@746 -- # xtrace_disable
00:06:48.889 08:02:58 -- common/autotest_common.sh@10 -- # set +x
00:06:48.890 08:02:58 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:06:49.146 08:02:58 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:49.146 08:02:58 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:49.146 08:02:58 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:06:49.146 08:02:58 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:49.146 08:02:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:06:49.146 08:02:58 -- common/autotest_common.sh@1455 -- # uname
00:06:49.146 08:02:58 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:06:49.146 08:02:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:06:49.146 08:02:58 -- common/autotest_common.sh@1475 -- # uname
00:06:49.146 08:02:58 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:06:49.146 08:02:58 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:06:49.146 08:02:58 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:06:49.146 08:02:58 -- spdk/autotest.sh@72 -- # hash lcov
00:06:49.146 08:02:58 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:06:49.146 08:02:58 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:06:49.146 --rc lcov_branch_coverage=1
00:06:49.146 --rc lcov_function_coverage=1
00:06:49.146 --rc genhtml_branch_coverage=1
00:06:49.146 --rc genhtml_function_coverage=1
00:06:49.146 --rc genhtml_legend=1
00:06:49.146 --rc geninfo_all_blocks=1
00:06:49.146 '
00:06:49.146 08:02:58 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:06:49.146 --rc lcov_branch_coverage=1
00:06:49.146 --rc lcov_function_coverage=1
00:06:49.146 --rc genhtml_branch_coverage=1
00:06:49.146 --rc genhtml_function_coverage=1
00:06:49.146 --rc genhtml_legend=1
00:06:49.146 --rc geninfo_all_blocks=1
00:06:49.146 '
00:06:49.146 08:02:58 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:06:49.146 --rc lcov_branch_coverage=1
00:06:49.146 --rc lcov_function_coverage=1
00:06:49.146 --rc genhtml_branch_coverage=1
00:06:49.146 --rc genhtml_function_coverage=1
00:06:49.146 --rc genhtml_legend=1
00:06:49.146 --rc geninfo_all_blocks=1
00:06:49.146 --no-external'
00:06:49.146 08:02:58 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:06:49.146 --rc lcov_branch_coverage=1
00:06:49.146 --rc lcov_function_coverage=1
00:06:49.146 --rc genhtml_branch_coverage=1
00:06:49.146 --rc genhtml_function_coverage=1
00:06:49.146 --rc genhtml_legend=1
00:06:49.146 --rc geninfo_all_blocks=1
00:06:49.146 --no-external'
00:06:49.146 08:02:58 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:06:49.146 lcov: LCOV version 1.14
00:06:49.146 08:02:58 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:07:07.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:07:07.240 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:07:19.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:07:19.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:07:19.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:07:19.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:07:19.453 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:07:22.745 08:03:31 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:07:22.745 08:03:31 -- common/autotest_common.sh@722 -- # xtrace_disable
00:07:22.745 08:03:31 -- common/autotest_common.sh@10 -- # set +x
00:07:22.745 08:03:31 -- spdk/autotest.sh@91 -- # rm -f
00:07:22.745 08:03:31 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:07:23.310 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:07:23.310 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:07:23.310 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:07:23.310 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:07:23.310 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:07:23.310 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:07:23.310 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:07:23.310 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:07:23.310 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:07:23.310 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:07:23.310 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:07:23.310 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:07:23.310 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:07:23.310 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:07:23.310 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:07:23.310 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:07:23.310 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:07:23.578 08:03:33 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:07:23.578 08:03:33 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:07:23.578 08:03:33 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:07:23.578 08:03:33 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:07:23.578 08:03:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:07:23.578 08:03:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:07:23.578 08:03:33 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:07:23.578 08:03:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:07:23.578 08:03:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:07:23.578 08:03:33 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:07:23.578 08:03:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:07:23.578 08:03:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:07:23.578 08:03:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:07:23.578 08:03:33 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:07:23.578 08:03:33 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
No valid GPT data, bailing
00:07:23.578 08:03:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:07:23.578 08:03:33 -- scripts/common.sh@391 -- # pt=
00:07:23.578 08:03:33 -- scripts/common.sh@392 -- # return 1
00:07:23.578 08:03:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:07:23.578 1+0 records in
00:07:23.578 1+0 records out
00:07:23.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492557 s, 213 MB/s
00:07:23.578 08:03:33 -- spdk/autotest.sh@118 -- # sync
00:07:23.578 08:03:33 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:07:23.578 08:03:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:07:23.578 08:03:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:07:25.477 08:03:34 -- spdk/autotest.sh@124 -- # uname -s
00:07:25.477 08:03:34 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:07:25.477 08:03:34 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:07:25.477 08:03:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:25.477 08:03:34 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:25.477 08:03:34 -- common/autotest_common.sh@10 -- # set +x
00:07:25.477 ************************************
00:07:25.477 START TEST setup.sh
00:07:25.477 ************************************
00:07:25.477 08:03:34 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
* Looking for test storage...
00:07:25.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:07:25.477 08:03:34 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:07:25.477 08:03:34 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:07:25.477 08:03:34 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:07:25.477 08:03:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:25.477 08:03:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:25.477 08:03:34 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:07:25.477 ************************************
00:07:25.477 START TEST acl
00:07:25.477 ************************************
00:07:25.477 08:03:34 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
* Looking for test storage...
00:07:25.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:07:25.477 08:03:35 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:07:25.477 08:03:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:07:25.477 08:03:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:07:25.477 08:03:35 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:07:25.477 08:03:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:07:25.477 08:03:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:07:25.477 08:03:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:07:25.477 08:03:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:07:25.477 08:03:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:07:25.477 08:03:35 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:07:25.477 08:03:35 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:07:25.477 08:03:35 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:07:25.477 08:03:35 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:07:25.477 08:03:35 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:07:25.477 08:03:35 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:07:25.477 08:03:35 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:07:26.851 08:03:36 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:07:26.851 08:03:36 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:07:26.851 08:03:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:07:26.851 08:03:36 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:07:26.851 08:03:36 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:07:26.851 08:03:36 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:07:28.222 Hugepages
00:07:28.222 node hugesize free / total
00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:07:28.222
00:07:28.222 Type BDF Vendor Device NUMA Driver Device Block devices
00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:28.222 08:03:37 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:07:28.222 08:03:37 setup.sh.acl -- 
setup/acl.sh@54 -- # run_test denied denied 00:07:28.222 08:03:37 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.222 08:03:37 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.222 08:03:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:28.222 ************************************ 00:07:28.222 START TEST denied 00:07:28.222 ************************************ 00:07:28.222 08:03:37 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:07:28.222 08:03:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:07:28.222 08:03:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:07:28.222 08:03:37 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:07:28.222 08:03:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:07:28.222 08:03:37 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:29.598 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:07:29.598 08:03:39 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:07:29.598 08:03:39 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:07:29.598 08:03:39 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:07:29.598 08:03:39 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:07:29.598 08:03:39 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:07:29.598 08:03:39 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:29.598 08:03:39 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:29.598 08:03:39 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:07:29.598 08:03:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:29.598 08:03:39 
setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:32.140 00:07:32.140 real 0m3.790s 00:07:32.140 user 0m1.100s 00:07:32.140 sys 0m1.776s 00:07:32.140 08:03:41 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.140 08:03:41 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:07:32.140 ************************************ 00:07:32.140 END TEST denied 00:07:32.140 ************************************ 00:07:32.140 08:03:41 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:07:32.140 08:03:41 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:07:32.140 08:03:41 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.140 08:03:41 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.140 08:03:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:32.140 ************************************ 00:07:32.140 START TEST allowed 00:07:32.140 ************************************ 00:07:32.140 08:03:41 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:07:32.140 08:03:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:07:32.140 08:03:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:07:32.140 08:03:41 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:07:32.140 08:03:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:07:32.140 08:03:41 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:34.663 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:07:34.663 08:03:43 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:07:34.663 08:03:43 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:07:34.663 08:03:43 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:07:34.663 
08:03:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:34.663 08:03:43 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:35.595 00:07:35.596 real 0m3.717s 00:07:35.596 user 0m0.958s 00:07:35.596 sys 0m1.600s 00:07:35.596 08:03:45 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.596 08:03:45 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:07:35.596 ************************************ 00:07:35.596 END TEST allowed 00:07:35.596 ************************************ 00:07:35.596 08:03:45 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:07:35.596 00:07:35.596 real 0m10.228s 00:07:35.596 user 0m3.152s 00:07:35.596 sys 0m5.074s 00:07:35.596 08:03:45 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.596 08:03:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:35.596 ************************************ 00:07:35.596 END TEST acl 00:07:35.596 ************************************ 00:07:35.596 08:03:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:35.596 08:03:45 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:07:35.596 08:03:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.596 08:03:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.596 08:03:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:35.855 ************************************ 00:07:35.855 START TEST hugepages 00:07:35.855 ************************************ 00:07:35.855 08:03:45 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:07:35.855 * Looking for test storage... 
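The `denied`/`allowed` tests above both funnel into `verify`, which resolves the driver bound to a PCI function by following the `driver` symlink under `/sys/bus/pci/devices/<bdf>/` with `readlink -f`, exactly as the trace shows for `0000:88:00.0`. A sketch of that lookup against a fake sysfs tree (the `PCI_ROOT` layout is an assumption standing in for the real `/sys/bus/pci`):

```shell
#!/usr/bin/env bash
# Hedged sketch: the bound driver name is the basename of the resolved
# devices/<bdf>/driver symlink, mirroring the readlink -f in acl.sh.
pci_driver() {
    local bdf=$1
    basename "$(readlink -f "$PCI_ROOT/devices/$bdf/driver")"
}

PCI_ROOT=$(mktemp -d)
mkdir -p "$PCI_ROOT/devices/0000:88:00.0" "$PCI_ROOT/drivers/vfio-pci"
ln -s "$PCI_ROOT/drivers/vfio-pci" "$PCI_ROOT/devices/0000:88:00.0/driver"
echo "0000:88:00.0 -> $(pci_driver 0000:88:00.0)"
```

With `PCI_BLOCKED` set, `setup.sh config` leaves the controller on its kernel driver and the test greps for the "Skipping denied controller" line; with `PCI_ALLOWED` set, the same lookup confirms the `nvme -> vfio-pci` rebind seen in the log.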
00:07:35.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41701536 kB' 'MemAvailable: 45206268 kB' 'Buffers: 2704 kB' 'Cached: 12266488 kB' 'SwapCached: 0 kB' 'Active: 9236720 kB' 'Inactive: 3506364 kB' 'Active(anon): 8842312 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477184 kB' 'Mapped: 183712 kB' 'Shmem: 8368420 kB' 'KReclaimable: 195164 kB' 'Slab: 558828 kB' 'SReclaimable: 195164 kB' 'SUnreclaim: 363664 kB' 'KernelStack: 12944 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 9976980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 
08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.855 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 
08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.856 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
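The xtrace above records `setup/common.sh` walking `/proc/meminfo` field by field (each rejected key produces one `continue` line) until it reaches the `Hugepagesize` key, then echoing its value (2048 kB) and returning. A minimal standalone sketch of that scan, using a hypothetical helper name and a canned sample file instead of the live `/proc/meminfo`:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo key scan seen in the trace: split each line as
# "key: value unit" with IFS=': ', skip every non-matching key (the
# repeated "continue" steps above), print the first match and stop.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Exercise it against a canned snippet rather than the live file:
printf 'MemTotal: 60541708 kB\nHugepagesize: 2048 kB\n' > /tmp/meminfo.sample
get_meminfo Hugepagesize /tmp/meminfo.sample
```

With the sample above this prints `2048`, matching the `echo 2048` step in the trace; the real script then stores it as `default_hugepages`.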
global_huge_nr=/proc/sys/vm/nr_hugepages 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:35.857 
08:03:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:35.857 08:03:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:07:35.857 08:03:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.857 08:03:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.857 08:03:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:35.857 ************************************ 00:07:35.857 START TEST default_setup 00:07:35.857 ************************************ 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:35.857 08:03:45 
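The `clear_hp` trace above iterates every NUMA node's per-size hugepage directories and echoes 0 into each `nr_hugepages` file, then exports `CLEAR_HUGE=yes`. A hedged sketch of that loop; writing the sysfs files requires root, so this version only prints what it would write:

```shell
#!/usr/bin/env bash
# Sketch of clear_hp from setup/hugepages.sh: zero out the per-node,
# per-size hugepage pools before a test run. Printing instead of writing
# so the sketch is safe to run unprivileged.
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo "would write 0 -> $hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # mirrors the trace: setup.sh then skips re-clearing
}

clear_hp
```

On the two-node test host this visits four pools (two page sizes per node), which is why the trace shows four `echo 0` steps.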
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:07:35.857 08:03:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:37.228 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:37.228 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:37.228 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:37.228 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:37.228 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:37.228 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:37.228 0000:00:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:07:37.228 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:37.228 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:37.228 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:37.228 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:37.228 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:37.228 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:37.228 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:37.228 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:37.228 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:38.171 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43830120 kB' 'MemAvailable: 47334832 kB' 'Buffers: 2704 kB' 'Cached: 12266588 kB' 'SwapCached: 0 kB' 'Active: 9254952 kB' 'Inactive: 3506364 kB' 'Active(anon): 8860544 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495244 kB' 'Mapped: 183744 kB' 'Shmem: 8368520 kB' 'KReclaimable: 195124 kB' 'Slab: 558324 kB' 'SReclaimable: 195124 kB' 'SUnreclaim: 363200 kB' 'KernelStack: 12832 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9997760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 
13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 
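The `printf '%s\n' 'MemTotal: ...'` dump above is `verify_nr_hugepages` replaying a cached copy of meminfo: `setup/common.sh` slurps the file once with `mapfile`, strips any `Node N ` prefix (the per-node files under `/sys/devices/system/node/nodeN/meminfo` carry one), and scans the cached array for a single field. A simplified sketch of that capture-and-replay, fed a canned per-node snippet (hypothetical file name):

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern, as in setup/common.sh

# Canned stand-in for /sys/devices/system/node/node0/meminfo.
printf 'Node 0 HugePages_Total: 1024\nNode 0 HugePages_Free: 1024\n' > /tmp/node0.meminfo

mapfile -t mem < /tmp/node0.meminfo   # slurp once (setup/common.sh@28)
mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix (@29)

get_cached() {                        # replay the cached copy, as the trace does
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_cached HugePages_Total            # prints 1024
```

Caching the dump lets the script pull several fields (AnonHugePages, HugePages_Surp, and so on) without re-reading the file, which is exactly why the same key-matching loop repeats below for each field.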
08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 
08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.171 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.172 08:03:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:07:38.172 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43840844 kB' 'MemAvailable: 47345556 kB' 'Buffers: 2704 kB' 'Cached: 12266592 kB' 'SwapCached: 0 kB' 'Active: 9255148 kB' 'Inactive: 3506364 kB' 'Active(anon): 8860740 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495532 kB' 'Mapped: 183748 kB' 'Shmem: 8368524 kB' 'KReclaimable: 195124 kB' 'Slab: 558296 kB' 'SReclaimable: 195124 kB' 'SUnreclaim: 363172 kB' 'KernelStack: 12816 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9997776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB'
00:07:38.172-00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... key-by-key scan elided: every key from MemTotal through HugePages_Rsvd fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hits 'continue', re-entering the IFS=': ' read loop at setup/common.sh@31 ...]
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # 
mapfile -t mem
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:07:38.174 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43841256 kB' 'MemAvailable: 47345968 kB' 'Buffers: 2704 kB' 'Cached: 12266608 kB' 'SwapCached: 0 kB' 'Active: 9255056 kB' 'Inactive: 3506364 kB' 'Active(anon): 8860648 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495436 kB' 'Mapped: 183748 kB' 'Shmem: 8368540 kB' 'KReclaimable: 195124 kB' 'Slab: 558296 kB' 'SReclaimable: 195124 kB' 'SUnreclaim: 363172 kB' 'KernelStack: 12816 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9997796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB'
00:07:38.174-00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... key-by-key scan elided: keys MemTotal through SecPageTables fail the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match and hit 'continue', re-entering the IFS=': ' read loop at setup/common.sh@31 ...]
00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31
-- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 
08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:38.175 nr_hugepages=1024 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:38.175 resv_hugepages=0 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:38.175 surplus_hugepages=0 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:38.175 anon_hugepages=0 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:38.175 08:03:47 
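The xtrace above shows setup/common.sh's get_meminfo walking /proc/meminfo key by key with `IFS=': '` and `read -r var val _`, skipping every non-matching key via `continue` until it hits the requested one. A minimal standalone sketch of that lookup pattern follows; it omits the per-node `mapfile` preprocessing the real helper does, so it is an illustration of the loop, not the actual setup/common.sh code.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan seen in the trace: read /proc/meminfo
# line by line, split each line on ': ' into key and value, and print
# the value for the requested key (the numeric part, unit stripped).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val%% *}"   # drop a trailing unit such as "kB"
            return 0
        fi
    done < /proc/meminfo
    return 1   # key not present
}

get_meminfo HugePages_Total
```

The per-key `continue` lines that dominate the log are exactly this loop's non-matching iterations, traced one xtrace line at a time.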
setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.175 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43840688 kB' 'MemAvailable: 47345400 kB' 'Buffers: 2704 kB' 'Cached: 12266632 kB' 'SwapCached: 0 kB' 'Active: 9255112 kB' 'Inactive: 3506364 kB' 'Active(anon): 8860704 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495480 kB' 'Mapped: 183748 kB' 'Shmem: 8368564 kB' 'KReclaimable: 195124 kB' 'Slab: 558332 kB' 'SReclaimable: 195124 kB' 'SUnreclaim: 363208 kB' 'KernelStack: 12848 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9997820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:38.175
[... 00:07:38.175-00:07:38.177 setup/common.sh@31-32: each /proc/meminfo key (MemTotal .. Unaccepted) compared against HugePages_Total; no match, continue ...]
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:38.177 08:03:47
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25657432 kB' 'MemUsed: 7172452 kB' 'SwapCached: 0 kB' 'Active: 3968272 kB' 'Inactive: 154724 kB' 'Active(anon): 3807620 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 154724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3861700 kB' 'Mapped: 65508 kB' 'AnonPages: 264468 kB' 'Shmem: 3546324 kB' 'KernelStack: 7496 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103684 kB' 'Slab: 331268 kB' 'SReclaimable: 103684 kB' 'SUnreclaim: 227584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 
08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.177 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:38.178 node0=1024 expecting 1024 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == 
\1\0\2\4 ]] 00:07:38.178 00:07:38.178 real 0m2.415s 00:07:38.178 user 0m0.649s 00:07:38.178 sys 0m0.892s 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.178 08:03:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:07:38.178 ************************************ 00:07:38.178 END TEST default_setup 00:07:38.178 ************************************ 00:07:38.435 08:03:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:38.435 08:03:47 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:07:38.435 08:03:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.435 08:03:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.435 08:03:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:38.435 ************************************ 00:07:38.435 START TEST per_node_1G_alloc 00:07:38.435 ************************************ 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@9 -- # [[ output == output ]] 00:07:38.435 08:03:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:39.393 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:39.393 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:39.393 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:39.393 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:39.393 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:39.393 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:39.393 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:39.393 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:39.393 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:39.393 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:39.393 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:39.393 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:39.393 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:39.393 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:39.393 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:39.393 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:39.393 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:39.657 08:03:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43857528 kB' 'MemAvailable: 47362240 kB' 'Buffers: 2704 kB' 'Cached: 12274548 kB' 'SwapCached: 0 kB' 'Active: 9263480 kB' 'Inactive: 3506364 kB' 'Active(anon): 8869072 
kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495896 kB' 'Mapped: 183840 kB' 'Shmem: 8376480 kB' 'KReclaimable: 195124 kB' 'Slab: 558376 kB' 'SReclaimable: 195124 kB' 'SUnreclaim: 363252 kB' 'KernelStack: 12912 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10006192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.657 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.657 [identical "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _" xtrace repeated for each remaining /proc/meminfo key, Buffers through HardwareCorrupted; no key matches, so every iteration hits 'continue'] 00:07:39.659
08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60541708 kB' 'MemFree: 43857516 kB' 'MemAvailable: 47362228 kB' 'Buffers: 2704 kB' 'Cached: 12274896 kB' 'SwapCached: 0 kB' 'Active: 9264012 kB' 'Inactive: 3506364 kB' 'Active(anon): 8869604 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496092 kB' 'Mapped: 183564 kB' 'Shmem: 8376828 kB' 'KReclaimable: 195124 kB' 'Slab: 558344 kB' 'SReclaimable: 195124 kB' 'SUnreclaim: 363220 kB' 'KernelStack: 12880 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10006208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:39.659 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.659 [identical "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" xtrace repeated for each remaining /proc/meminfo key, MemAvailable through Unaccepted; no key matches, so every iteration hits 'continue'] 00:07:39.660
08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.660 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.660 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.660 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.660 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.660 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.660 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.660 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.660 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.660 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.660 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@18 -- # local node=
00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:39.661 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43858268 kB' 'MemAvailable: 47362980 kB' 'Buffers: 2704 kB' 'Cached: 12274916 kB' 'SwapCached: 0 kB' 'Active: 9263540 kB' 'Inactive: 3506364 kB' 'Active(anon): 8869132 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495596 kB' 'Mapped: 183564 kB' 'Shmem: 8376848 kB' 'KReclaimable: 195124 kB' 'Slab: 558404 kB' 'SReclaimable: 195124 kB' 'SUnreclaim: 363280 kB' 'KernelStack: 12864 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10006232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB'
00:07:39.661-00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (per-key scan for HugePages_Rsvd over MemTotal through HugePages_Free: no match; repeated IFS=': ' / read / continue iterations condensed)
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@25 -- # [[ -n '' ]]
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43858016 kB' 'MemAvailable: 47362728 kB' 'Buffers: 2704 kB' 'Cached: 12274936 kB' 'SwapCached: 0 kB' 'Active: 9263908 kB' 'Inactive: 3506364 kB' 'Active(anon): 8869500 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495928 kB' 'Mapped: 183564 kB' 'Shmem: 8376868 kB' 'KReclaimable: 195124 kB' 'Slab: 558404 kB' 'SReclaimable: 195124 kB' 'SUnreclaim: 363280 kB' 'KernelStack: 12880 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10006252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB'
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (per-key scan for HugePages_Total over MemTotal through Mlocked: no match; repeated IFS=': ' / read / continue iterations condensed)
00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[
SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.663 08:03:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.663 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:39.664 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:39.665 
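The long run of `continue` lines above is the xtrace of a single parsing idiom: setup/common.sh splits each meminfo line with `IFS=': '` and `read -r var val _`, skips every field until the requested key (the backslash-escaped `\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l` is just how xtrace prints a literal pattern), then echoes the value and returns. A minimal self-contained sketch of that idiom (the function name and the optional file argument are illustrative, not the real helper's signature):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo idea from setup/common.sh, simplified:
# scan a meminfo-style file and print the value of one key.
get_meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo}   # key to look up; file defaults to /proc/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested key matches (the "continue"
        # lines in the log above are exactly this branch firing per field).
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
```

On the machine in this log, `get_meminfo_value HugePages_Total` would print `1024`, matching the `echo 1024` / `return 0` seen above.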
08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26714772 kB' 'MemUsed: 6115112 kB' 'SwapCached: 0 kB' 'Active: 3967820 kB' 'Inactive: 154724 kB' 'Active(anon): 3807168 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 154724 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3861812 kB' 'Mapped: 65516 kB' 'AnonPages: 263920 kB' 'Shmem: 3546436 kB' 'KernelStack: 7496 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103684 kB' 'Slab: 331268 kB' 'SReclaimable: 103684 kB' 'SUnreclaim: 227584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 
08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.665 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... xtrace trimmed: get_meminfo walks the remaining node0 meminfo fields (KernelStack through HugePages_Free), hitting "continue" on each until the requested key comes up ...]
00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.666 08:03:49
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17143244 kB' 'MemUsed: 10568580 kB' 'SwapCached: 0 kB' 'Active: 5295756 kB' 'Inactive: 3351640 kB' 'Active(anon): 5062000 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3351640 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8415852 kB' 'Mapped: 118048 kB' 'AnonPages: 231672 kB' 'Shmem: 4830456 kB' 'KernelStack: 5368 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91440 kB' 'Slab: 227136 kB' 'SReclaimable: 91440 kB' 'SUnreclaim: 135696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.666 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... xtrace trimmed: the same field-by-field scan repeats over the node1 meminfo output (MemUsed through HugePages_Total), one "continue" per field, until HugePages_Surp matches ...]
00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@31 -- # IFS=': ' 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:39.668 node0=512 expecting 512 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 
expecting 512' 00:07:39.668 node1=512 expecting 512 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:39.668 00:07:39.668 real 0m1.400s 00:07:39.668 user 0m0.604s 00:07:39.668 sys 0m0.757s 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.668 08:03:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:39.668 ************************************ 00:07:39.668 END TEST per_node_1G_alloc 00:07:39.668 ************************************ 00:07:39.668 08:03:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:39.668 08:03:49 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:07:39.668 08:03:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.668 08:03:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.668 08:03:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:39.668 ************************************ 00:07:39.668 START TEST even_2G_alloc 00:07:39.668 ************************************ 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:39.668 
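The scan traced above is the harness's `get_meminfo` helper: it reads a node's meminfo file line by line, skipping every field until the requested key matches, then echoes its value. A minimal standalone sketch of that pattern (illustrative only, not the harness's own code; the synthetic temp file stands in for `/sys/devices/system/node/nodeN/meminfo`):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: strip the "Node N "
# prefix that per-node meminfo files carry, split each line on ": ", and
# return the value of the requested field.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 mem_f=$2
    local mem var val _ line
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it before
    # splitting, as the harness does with the same expansion.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Demo against a synthetic per-node meminfo file.
mem_f=$(mktemp)
printf '%s\n' 'Node 1 MemTotal:  27711824 kB' \
              'Node 1 HugePages_Surp:     0' > "$mem_f"
get_meminfo HugePages_Surp "$mem_f"   # prints 0
rm -f "$mem_f"
```

On a real NUMA system the file would be `/sys/devices/system/node/node1/meminfo`, falling back to `/proc/meminfo` (no `Node N ` prefix) when no node is given.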
08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup 
output 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:39.668 08:03:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:41.045 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:41.045 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:41.045 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:41.045 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:41.045 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:41.045 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:41.045 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:41.045 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:41.045 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:41.045 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:41.045 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:41.045 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:41.045 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:41.045 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:41.045 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:41.045 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:41.045 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 
00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43847064 kB' 'MemAvailable: 47351784 kB' 'Buffers: 2704 kB' 'Cached: 12275032 kB' 'SwapCached: 0 kB' 'Active: 9260644 kB' 'Inactive: 3506364 kB' 'Active(anon): 8866236 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 
kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492488 kB' 'Mapped: 182548 kB' 'Shmem: 8376964 kB' 'KReclaimable: 195140 kB' 'Slab: 558296 kB' 'SReclaimable: 195140 kB' 'SUnreclaim: 363156 kB' 'KernelStack: 12784 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9992396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 
08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.046 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43847332 kB' 'MemAvailable: 47352044 kB' 'Buffers: 2704 kB' 'Cached: 12275036 kB' 'SwapCached: 0 kB' 'Active: 9260348 kB' 'Inactive: 3506364 kB' 'Active(anon): 8865940 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492196 kB' 'Mapped: 182500 kB' 'Shmem: 8376968 kB' 'KReclaimable: 195124 kB' 'Slab: 558240 kB' 'SReclaimable: 195124 kB' 'SUnreclaim: 363116 kB' 'KernelStack: 12832 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9992412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 
33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.047 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 
08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.048 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:41.049 
08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43847680 kB' 'MemAvailable: 47352392 kB' 'Buffers: 2704 kB' 'Cached: 12275056 kB' 'SwapCached: 0 kB' 'Active: 9260284 kB' 'Inactive: 3506364 kB' 'Active(anon): 8865876 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492108 kB' 'Mapped: 182500 kB' 'Shmem: 8376988 kB' 'KReclaimable: 195124 kB' 'Slab: 558248 kB' 'SReclaimable: 195124 kB' 'SUnreclaim: 363124 kB' 'KernelStack: 12832 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9992432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 
08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.049 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.050 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.050 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:41.051 nr_hugepages=1024 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:41.051 
resv_hugepages=0 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:41.051 surplus_hugepages=0 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:41.051 anon_hugepages=0 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43848504 kB' 'MemAvailable: 47353216 kB' 'Buffers: 2704 kB' 'Cached: 12275056 kB' 
'SwapCached: 0 kB' 'Active: 9259996 kB' 'Inactive: 3506364 kB' 'Active(anon): 8865588 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491824 kB' 'Mapped: 182500 kB' 'Shmem: 8376988 kB' 'KReclaimable: 195124 kB' 'Slab: 558248 kB' 'SReclaimable: 195124 kB' 'SUnreclaim: 363124 kB' 'KernelStack: 12832 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9992456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:41.051
08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:41.051 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[identical four-record trace (key test against HugePages_Total, continue, IFS=': ', read -r var val _) repeated for every remaining /proc/meminfo key from MemFree through Unaccepted]
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:41.052
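The lookup traced above (split each meminfo line on `': '`, `continue` until the requested key matches, then echo the value and return) can be sketched as a small bash function. This is a reconstruction from the trace output, not the verbatim `setup/common.sh` helper; the optional file argument is added here purely for illustration.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup visible in the trace: iterate the
# "Key:   value kB" lines of a meminfo file, skipping keys until the
# requested one matches, then print its value.
get_meminfo() {
    local get=$1
    local mem_f=${2:-/proc/meminfo}   # file override is illustrative only
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long run of "continue" in the log
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
```

On the logged host this lookup printed `1024` for `HugePages_Total`, matching the `echo 1024` record in the trace.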
08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:07:41.052 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc
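The per-node variant traced above switches `mem_f` to `/sys/devices/system/node/node0/meminfo` when that sysfs file exists, reads it with `mapfile`, and strips the `Node N ` line prefix with an extglob substitution before running the same key/value loop. A minimal sketch, assuming the same structure as the trace (the function name and the optional file argument are illustrative, not the actual SPDK source):

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Sketch of the per-node meminfo lookup from the trace: prefer the
# node's sysfs meminfo file, strip the "Node N " prefix from each
# line, then search for the requested key as in the flat case.
get_node_meminfo() {
    local get=$1 node=$2
    local mem_f=${3:-/sys/devices/system/node/node$node/meminfo}
    [[ -e $mem_f ]] || mem_f=/proc/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
```

The prefix strip matters because a plain `IFS=': '` read on a raw per-node line would split out `Node` as the key; after the substitution the same loop works for both file layouts.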
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26713272 kB' 'MemUsed: 6116612 kB' 'SwapCached: 0 kB' 'Active: 3966556 kB' 'Inactive: 154724 kB' 'Active(anon): 3805904 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 154724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3861936 kB' 'Mapped: 64456 kB' 'AnonPages: 262472 kB' 'Shmem: 3546560 kB' 'KernelStack: 7512 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103708 kB' 'Slab: 331272 kB' 'SReclaimable: 103708 kB' 'SUnreclaim: 227564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:41.053
08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:41.053 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[identical four-record trace repeated for every remaining node-0 meminfo key from MemFree through HugePages_Free]
00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@33 -- # echo 0 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17133216 kB' 'MemUsed: 10578608 kB' 'SwapCached: 0 kB' 'Active: 5295576 kB' 'Inactive: 3351640 kB' 'Active(anon): 5061820 kB' 
'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3351640 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8415864 kB' 'Mapped: 118480 kB' 'AnonPages: 231488 kB' 'Shmem: 4830468 kB' 'KernelStack: 5320 kB' 'PageTables: 3512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91416 kB' 'Slab: 226968 kB' 'SReclaimable: 91416 kB' 'SUnreclaim: 135552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.054 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.055 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:41.327 node0=512 expecting 512 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:07:41.327 node1=512 expecting 512 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:41.327 00:07:41.327 real 0m1.408s 00:07:41.327 user 0m0.591s 00:07:41.327 sys 0m0.775s 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.327 08:03:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:41.327 ************************************ 00:07:41.327 END TEST even_2G_alloc 00:07:41.327 ************************************ 00:07:41.327 08:03:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:41.327 08:03:50 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:07:41.327 08:03:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.327 08:03:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.327 
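The long `IFS=': '` / `read -r var val _` / `continue` runs traced above come from `setup/common.sh`'s `get_meminfo` helper scanning a meminfo file key by key (the escaped `[[ Key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` is just bash quoting the literal pattern). A minimal standalone sketch of that pattern — function name and file path here are illustrative, not the SPDK originals:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the traced lookup: split each meminfo line on ':' and
# spaces, skip every key with `continue` until the requested one matches, then
# echo its value; echo 0 if the key is absent (mirroring the trace's `echo 0`).
get_meminfo_sketch() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # traced as [[ Key == \H\u\g\e... ]]
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    echo 0
}

# Example against a fake per-node meminfo file:
printf '%s\n' 'HugePages_Total: 512' 'HugePages_Free: 512' \
    'HugePages_Surp: 0' > /tmp/fake_meminfo
get_meminfo_sketch HugePages_Surp /tmp/fake_meminfo   # prints 0
get_meminfo_sketch HugePages_Free /tmp/fake_meminfo   # prints 512
```

In the real helper the per-node variant reads `/sys/devices/system/node/node$N/meminfo` and first strips the `Node N ` prefix, as the `mem=("${mem[@]#Node +([0-9]) }")` trace entries show.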
08:03:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:41.327 ************************************ 00:07:41.327 START TEST odd_alloc 00:07:41.327 ************************************ 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@83 -- # : 513 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:41.327 08:03:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:42.259 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:42.259 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:42.259 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:42.521 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:42.521 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:42.521 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:42.521 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:42.521 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:42.521 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:42.521 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:42.521 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:42.521 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:07:42.521 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:42.521 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:42.521 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:42.521 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:42.521 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:42.521 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43830092 kB' 'MemAvailable: 47334800 kB' 'Buffers: 2704 kB' 'Cached: 12275164 kB' 'SwapCached: 0 kB' 'Active: 9261580 kB' 'Inactive: 3506364 kB' 'Active(anon): 8867172 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493328 kB' 'Mapped: 182576 kB' 'Shmem: 8377096 kB' 'KReclaimable: 195116 kB' 'Slab: 557964 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362848 kB' 'KernelStack: 13312 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9995184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196528 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.521 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 
08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:42.522 08:03:52 
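The trace above is one complete `get_meminfo` call: the harness snapshots `/proc/meminfo` into an array, then walks it line by line with `IFS=': ' read -r var val _`, skipping every key (`continue` at `common.sh@32`) until the requested one matches, at which point it echoes the value (`common.sh@33 -- # echo 0`) and returns — here yielding `anon=0` for `AnonHugePages`. A minimal sketch of that pattern (function and file names here are illustrative, not the exact `setup/common.sh` implementation):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scan pattern seen in the trace: split each
# "Key: value kB" line on ':' and space, print the value for one key.
get_meminfo_value() {
    local get=$1 file=${2:-/proc/meminfo}
    local var val _
    # IFS=': ' splits "AnonHugePages: 0 kB" into var=AnonHugePages,
    # val=0, with the unit swallowed by the throwaway field.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1  # key not present
}
```

Usage mirrors the hugepages script: `anon=$(get_meminfo_value AnonHugePages)`. In the verbose `set -x` trace, every non-matching key shows up as one `[[ Key == \A\n\o\n... ]]` test followed by `continue`, which is why the log repeats the same three lines for each of the ~50 meminfo fields.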
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43834588 kB' 'MemAvailable: 47339296 kB' 'Buffers: 2704 kB' 'Cached: 12275164 kB' 'SwapCached: 0 kB' 'Active: 9262372 kB' 'Inactive: 3506364 kB' 'Active(anon): 8867964 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494200 kB' 'Mapped: 182636 kB' 'Shmem: 8377096 kB' 'KReclaimable: 195116 kB' 'Slab: 557964 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362848 kB' 'KernelStack: 13424 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 
'Committed_AS: 9993832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196464 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.522 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.523 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read / compare / continue trace records for the remaining /proc/meminfo keys elided ...]
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43842460 kB' 'MemAvailable: 47347168 kB' 'Buffers: 2704 kB' 'Cached: 12275164 kB' 'SwapCached: 0 kB' 'Active: 9262412 kB' 'Inactive: 3506364 kB' 'Active(anon): 8868004 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494124 kB' 'Mapped: 182472 kB' 'Shmem: 8377096 kB' 'KReclaimable: 195116 kB' 'Slab: 558044 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362928 kB' 'KernelStack: 13376 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9993852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196448 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB'
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:42.523 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read / compare / continue trace records for the remaining /proc/meminfo keys elided ...]
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:07:42.833 nr_hugepages=1025
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:42.833 resv_hugepages=0
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:42.833 surplus_hugepages=0
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:42.833 anon_hugepages=0
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:42.833 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43845376 kB' 'MemAvailable: 47350084 kB' 'Buffers: 2704 kB' 'Cached: 12275208 kB' 'SwapCached: 0 kB' 'Active: 9261504 kB' 'Inactive: 3506364 kB' 'Active(anon): 8867096 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493156 kB' 'Mapped: 182480 kB' 'Shmem: 8377140 kB' 'KReclaimable: 195116 kB' 'Slab: 557988 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362872 kB' 'KernelStack: 13104 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9992872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB'
00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read / compare / continue trace records for the intervening /proc/meminfo keys elided ...]
00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.834 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( 
no_nodes > 0 )) 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26706120 kB' 'MemUsed: 6123764 kB' 'SwapCached: 0 kB' 'Active: 3966248 kB' 'Inactive: 154724 kB' 'Active(anon): 3805596 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 154724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3862060 kB' 'Mapped: 64456 kB' 'AnonPages: 262060 kB' 'Shmem: 3546684 kB' 'KernelStack: 7464 kB' 'PageTables: 4224 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103700 kB' 'Slab: 331092 kB' 'SReclaimable: 103700 kB' 'SUnreclaim: 227392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.835 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17139064 kB' 'MemUsed: 10572760 kB' 'SwapCached: 0 kB' 'Active: 5294468 kB' 'Inactive: 3351640 kB' 'Active(anon): 5060712 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3351640 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8415892 kB' 'Mapped: 118064 kB' 'AnonPages: 230332 kB' 'Shmem: 4830496 kB' 'KernelStack: 5400 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91416 kB' 'Slab: 226892 kB' 'SReclaimable: 91416 kB' 'SUnreclaim: 135476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
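The trace above shows `get_meminfo` reading `/sys/devices/system/node/node1/meminfo`, stripping the `Node <N> ` prefix, and looping field-by-field with `IFS=': '` until the requested key matches. A minimal sketch of that parsing loop, using an illustrative heredoc in place of the real meminfo file (single-digit node IDs are assumed; this is not the exact upstream helper):

```shell
# Hedged sketch of the get_meminfo loop traced above: split each
# meminfo-style line on ': ' into key and value, and "continue"
# past every field until the requested one matches.
get_meminfo_sketch() {
    local get=$1 line var val _
    while IFS= read -r line; do
        line=${line#Node [0-9] }                 # drop per-node "Node <N> " prefix
        IFS=': ' read -r var val _ <<<"$line"    # key, value, unit
        [[ $var == "$get" ]] || continue         # the repeated "continue" in the log
        echo "$val"
        return 0
    done
    return 1
}

# Sample data is illustrative, mirroring the node1 values in the trace.
get_meminfo_sketch HugePages_Surp <<'EOF'
Node 1 HugePages_Total: 513
Node 1 HugePages_Free: 513
Node 1 HugePages_Surp: 0
EOF
```

This reproduces the `echo 0` the trace emits once `HugePages_Surp` is reached.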
00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.836 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:42.837 
08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:07:42.837 node0=512 expecting 513 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:42.837 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:42.838 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:42.838 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:07:42.838 node1=513 expecting 512 00:07:42.838 08:03:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:07:42.838 00:07:42.838 real 0m1.505s 00:07:42.838 user 0m0.667s 00:07:42.838 sys 0m0.802s 00:07:42.838 08:03:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.838 08:03:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:42.838 ************************************ 00:07:42.838 END TEST odd_alloc 00:07:42.838 ************************************ 00:07:42.838 08:03:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:42.838 08:03:52 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:07:42.838 08:03:52 
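The odd_alloc test passes here via `[[ 512 513 == \5\1\2\ \5\1\3 ]]`: the odd total of 1025 hugepages is split across the two NUMA nodes as 512 and 513, and the `sorted_t` associative array accepts the totals in either node order. A small sketch of that check (the array values mirror the `node0=512 expecting 513` / `node1=513 expecting 512` lines above; the exact upstream bookkeeping is simplified):

```shell
# Hedged sketch of the odd_alloc pass condition: per-node totals are
# collected as keys of an associative array, so order does not matter.
nodes_test=([0]=512 [1]=513)
declare -A sorted_t=()
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1     # each node total becomes a set member
done
# Associative-array key order is unspecified, so sort before comparing.
result=$(printf '%s\n' "${!sorted_t[@]}" | sort -n | paste -sd' ')
[[ $result == "512 513" ]] && echo "odd_alloc: node totals match"
```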
setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.838 08:03:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.838 08:03:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:42.838 ************************************ 00:07:42.838 START TEST custom_alloc 00:07:42.838 ************************************ 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:42.838 08:03:52 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
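In the first `get_test_nr_hugepages_per_node` call above, the 512 requested pages are spread evenly over the two nodes, with `nodes_test[_no_nodes - 1]=256` assigned as `_no_nodes` counts down. A sketch of that even split for the two-node case shown in the log (the per-node share arithmetic is an assumption for this case, not the exact upstream formula):

```shell
# Hedged sketch of the even per-node split: working backwards from the
# last node index, each node receives an equal share of _nr_hugepages
# (256 + 256 for the 512 pages requested here).
_nr_hugepages=512
_no_nodes=2
share=$(( _nr_hugepages / _no_nodes ))
nodes_test=()
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$share    # matches "nodes_test[_no_nodes - 1]=256"
    (( _no_nodes-- ))
done
echo "${nodes_test[*]}"
```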
00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output 
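The `HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")` loop traced above builds the per-node parameter string that ends up as `HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'`, with `_nr_hugepages` accumulating the 1536-page total. A sketch of that assembly (values taken from the trace; `IFS=,` is what makes `"${HUGENODE[*]}"` join with commas, matching the `local IFS=,` at the top of `custom_alloc`):

```shell
# Hedged sketch of the HUGENODE construction seen in the trace.
IFS=,                               # array expansion with [*] joins on ','
nodes_hp=([0]=512 [1]=1024)         # per-node targets from the log
HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
echo "HUGENODE=${HUGENODE[*]}"      # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages"  # nr_hugepages=1536
```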
]] 00:07:42.838 08:03:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:43.770 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:43.770 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:43.770 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:43.770 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:43.770 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:43.770 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:43.770 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:43.770 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:43.770 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:43.770 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:43.770 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:43.770 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:43.770 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:43.770 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:43.770 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:43.770 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:43.770 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 
00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42801624 kB' 'MemAvailable: 46306332 kB' 'Buffers: 2704 kB' 'Cached: 12275296 kB' 'SwapCached: 0 kB' 'Active: 9260648 kB' 'Inactive: 3506364 kB' 'Active(anon): 8866240 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492208 kB' 'Mapped: 182624 kB' 'Shmem: 8377228 kB' 'KReclaimable: 195116 kB' 'Slab: 557852 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362736 kB' 'KernelStack: 12848 kB' 'PageTables: 7668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9993080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.032 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:44.033 08:03:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.033 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42801792 kB' 'MemAvailable: 46306500 kB' 'Buffers: 2704 kB' 'Cached: 12275300 kB' 'SwapCached: 0 kB' 'Active: 9261444 kB' 'Inactive: 3506364 kB' 'Active(anon): 8867036 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493060 kB' 'Mapped: 182624 kB' 'Shmem: 8377232 kB' 'KReclaimable: 195116 kB' 'Slab: 557832 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362716 kB' 'KernelStack: 12912 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9993096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 
0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 
08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.034 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trimmed: scan continues over remaining /proc/meminfo fields (Committed_AS .. HugePages_Rsvd); none match HugePages_Surp, each iteration hits `continue`]
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:44.035 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42802788 kB' 'MemAvailable: 46307496 kB' 'Buffers: 2704 kB' 'Cached: 12275300 kB' 'SwapCached: 0 kB' 'Active: 9261528 kB' 'Inactive: 3506364 kB' 'Active(anon): 8867120 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493128 kB' 'Mapped: 182528 kB' 'Shmem: 8377232 kB' 'KReclaimable: 195116 kB' 'Slab: 557888 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362772 kB' 'KernelStack: 12912 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9993116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB'
00:07:44.035-037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trimmed: scan over every /proc/meminfo field (MemTotal .. HugePages_Free); none match HugePages_Rsvd, each iteration hits `continue`]
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:07:44.037 nr_hugepages=1536
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:44.037 resv_hugepages=0
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:44.037 surplus_hugepages=0
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:44.037 anon_hugepages=0
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:44.037 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42804512 kB' 'MemAvailable: 46309220 kB' 'Buffers: 2704 kB' 'Cached: 12275340 kB' 'SwapCached: 0 kB' 'Active: 9260876 kB' 'Inactive: 3506364 kB' 'Active(anon): 8866468 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492400 kB' 'Mapped: 182528 kB' 'Shmem: 8377272 kB' 'KReclaimable: 195116 kB' 'Slab: 557888 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362772 kB' 'KernelStack: 12880 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9993140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB'
00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trimmed: scan for HugePages_Total begins; fields MemTotal .. Zswapped checked so far with no match, and the log excerpt ends mid-scan]
# read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.038 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 
08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 
08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:44.039 08:03:53 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26702064 kB' 'MemUsed: 6127820 kB' 'SwapCached: 0 kB' 'Active: 3966848 kB' 'Inactive: 154724 kB' 'Active(anon): 3806196 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 154724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3862104 kB' 'Mapped: 64456 kB' 'AnonPages: 262636 kB' 'Shmem: 3546728 kB' 'KernelStack: 7512 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103700 kB' 'Slab: 331056 kB' 'SReclaimable: 103700 kB' 'SUnreclaim: 227356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB'
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:44.039 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:07:44.298 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:44.298 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[identical compare-and-continue trace entries for the remaining node0 meminfo fields (MemFree .. HugePages_Free) elided]
00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:07:44.299 08:03:53
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16102748 kB' 'MemUsed: 11609076 kB' 'SwapCached: 0 kB' 'Active: 5293736 kB' 'Inactive: 3351640 kB' 'Active(anon): 5059980 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3351640 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8415940 kB' 'Mapped: 118072 kB' 'AnonPages: 229464 kB' 'Shmem: 4830544 kB' 'KernelStack: 5368 kB' 'PageTables: 3496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91416 kB' 'Slab: 226832 kB' 'SReclaimable: 91416 kB' 'SUnreclaim: 135416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:44.299 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.299 08:03:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: read/continue repeated for each node1 meminfo field, MemTotal through HugePages_Free, none matching HugePages_Surp) 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:44.300 node0=512 expecting 512 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:07:44.300 node1=1024 expecting 1024 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:07:44.300 00:07:44.300 real 0m1.423s 00:07:44.300 user 0m0.609s 00:07:44.300 sys 0m0.774s 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.300 08:03:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:44.300 ************************************ 00:07:44.300 END TEST custom_alloc 00:07:44.300 ************************************ 00:07:44.300 08:03:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:44.300 08:03:53 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:07:44.300 08:03:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.300 08:03:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.300 08:03:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:44.300 ************************************ 00:07:44.300 START TEST no_shrink_alloc 00:07:44.300 ************************************ 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local 
node_ids 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:44.300 08:03:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:45.675 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:45.675 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:45.675 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
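The trace above walks get_test_nr_hugepages for the no_shrink_alloc test: a 2097152 kB request divided by the 2048 kB default hugepage size yields nr_hugepages=1024, pinned to the single user-supplied node 0. That arithmetic can be sketched as a standalone script (the function and variable names mirror setup/hugepages.sh, but the kB unit and the simplified node handling here are assumptions for illustration, not the SPDK script itself):

```shell
#!/usr/bin/env bash
# Sketch (assumption-laden, not SPDK's setup/hugepages.sh): derive a
# default-size hugepage count from a requested size in kB and spread it
# across the requested NUMA nodes.

default_hugepages=2048   # assumed default hugepage size in kB (2 MiB)

get_test_nr_hugepages() {
    local size=$1; shift
    local node_ids=("$@")            # optional explicit node list, e.g. (0)
    (( size >= default_hugepages )) || return 1
    local nr_hugepages=$(( size / default_hugepages ))
    declare -gA nodes_test=()        # global map: node id -> page count
    if (( ${#node_ids[@]} > 0 )); then
        # caller pinned specific nodes: give each the full count
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[$node]=$nr_hugepages
        done
    else
        # no nodes given: put everything on node 0
        nodes_test[0]=$nr_hugepages
    fi
}

get_test_nr_hugepages 2097152 0      # 2 GiB worth of 2 MiB pages on node 0
echo "node0=${nodes_test[0]}"
```

With the values from the log (2097152 kB, node 0) this prints node0=1024, matching the nr_hugepages=1024 and nodes_test[_no_nodes]=1024 lines in the trace.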
00:07:45.675 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:45.675 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:45.675 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:45.675 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:45.675 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:45.675 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:45.675 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:45.675 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:45.675 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:45.675 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:45.675 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:45.675 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:45.675 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:45.675 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:45.675 08:03:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43836936 kB' 'MemAvailable: 47341644 kB' 'Buffers: 2704 kB' 'Cached: 12275424 kB' 'SwapCached: 0 kB' 'Active: 9261488 kB' 'Inactive: 3506364 kB' 'Active(anon): 8867080 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492872 kB' 'Mapped: 182680 kB' 'Shmem: 8377356 kB' 'KReclaimable: 195116 kB' 'Slab: 557916 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362800 kB' 'KernelStack: 12912 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9998764 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.675 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.675 08:03:55 
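Every get_meminfo call in this trace follows the same pattern: choose /proc/meminfo or, when a node argument is given, /sys/devices/system/node/nodeN/meminfo; strip the "Node N " prefix those per-node files carry; then read field by field with IFS=': ' until the requested key matches. A condensed standalone sketch of that parsing (an illustration of the technique, not SPDK's setup/common.sh; the sed-based prefix strip stands in for the original's mapfile plus pattern expansion):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: print the value of
# one meminfo field, optionally restricted to a single NUMA node.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <n> "; strip it, then
    # split each line on ': ' exactly as the xtrace above shows.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

get_meminfo MemTotal       # system-wide MemTotal in kB
get_meminfo HugePages_Surp 1 || true   # per-node lookup, as in the trace
```

Fields before the requested key are simply skipped, which is why the log shows one read/continue pair per meminfo line before the match.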
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: read/continue repeated for each meminfo field, MemTotal through Shmem, none matching AnonHugePages) 00:07:45.676 08:03:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 
08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 
08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.676 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:45.677 
08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43841712 kB' 'MemAvailable: 47346420 kB' 'Buffers: 2704 kB' 'Cached: 12275424 kB' 'SwapCached: 0 kB' 'Active: 9261584 kB' 'Inactive: 3506364 kB' 'Active(anon): 8867176 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492952 kB' 'Mapped: 182668 kB' 'Shmem: 8377356 kB' 'KReclaimable: 195116 kB' 'Slab: 557908 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362792 kB' 'KernelStack: 12928 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9993344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:45.677 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.677 08:03:55
[identical "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" scan repeated for each remaining /proc/meminfo key until HugePages_Surp]
08:03:55
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43841812 kB' 'MemAvailable: 47346520 kB' 'Buffers: 2704 kB' 'Cached: 12275428 kB' 'SwapCached: 0 kB' 'Active: 9260788 kB' 'Inactive: 3506364 kB' 'Active(anon): 8866380 kB' 'Inactive(anon): 0 kB' 'Active(file): 
394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492244 kB' 'Mapped: 182608 kB' 'Shmem: 8377360 kB' 'KReclaimable: 195116 kB' 'Slab: 557916 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362800 kB' 'KernelStack: 12912 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9993368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.678 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.679 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:45.680 nr_hugepages=1024 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:45.680 resv_hugepages=0 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:45.680 surplus_hugepages=0 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:45.680 anon_hugepages=0 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 
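The trace up to this point is `setup/common.sh`'s `get_meminfo` helper scanning `/proc/meminfo` one field at a time with `IFS=': '` until the requested key (here `HugePages_Surp`, then `HugePages_Rsvd`, then `HugePages_Total`) matches, and echoing its value. A minimal stand-alone sketch of that pattern follows; it is an illustrative simplification, not the script's exact implementation (the real helper uses `mapfile` plus backslash-escaped pattern matches, and reads per-node meminfo when a node is given), and the sample meminfo snippet below is made up, not taken from this run:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style lookup seen in this log (assumption: simplified
# from setup/common.sh; field names are real /proc/meminfo keys).
get_meminfo() {
    local get=$1 var val _
    # Split each "Key:   value [kB]" line on ':' and spaces, like the trace's
    # `IFS=': '` / `read -r var val _` loop, and stop at the requested key.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Illustrative sample; on a live system you would redirect from /proc/meminfo.
sample='HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0'

surp=$(get_meminfo HugePages_Surp <<<"$sample")
resv=$(get_meminfo HugePages_Rsvd <<<"$sample")
total=$(get_meminfo HugePages_Total <<<"$sample")
echo "surp=$surp resv=$resv total=$total"
```

This is why the trace repeats `[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `continue` for every non-matching meminfo field: under `set -x` each loop iteration is echoed, so one lookup produces dozens of trace lines before the `echo 0` / `return 0` that yields `surp=0` above.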
-- # mem_f=/proc/meminfo 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43842012 kB' 'MemAvailable: 47346720 kB' 'Buffers: 2704 kB' 'Cached: 12275468 kB' 'SwapCached: 0 kB' 'Active: 9261124 kB' 'Inactive: 3506364 kB' 'Active(anon): 8866716 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492540 kB' 'Mapped: 182608 kB' 'Shmem: 8377400 kB' 'KReclaimable: 195116 kB' 'Slab: 557916 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362800 kB' 'KernelStack: 12912 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9993392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 
kB' 'DirectMap1G: 53477376 kB' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.680 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical skip iterations for the remaining /proc/meminfo fields elided ...] 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25642920 kB' 'MemUsed: 7186964 kB' 'SwapCached: 0 kB' 'Active: 3967908 kB' 'Inactive: 154724 kB' 'Active(anon): 3807256 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 154724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3862248 kB' 'Mapped: 64456 kB' 'AnonPages: 263592 kB' 'Shmem: 3546872 kB' 'KernelStack: 7528 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103700 kB' 
'Slab: 331080 kB' 'SReclaimable: 103700 kB' 'SUnreclaim: 227380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:45.682 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical skip iterations for the remaining node0 meminfo fields elided ...] 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:45.683 node0=1024 expecting 1024 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:45.683 08:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:47.059 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:47.059 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:47.059 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:47.059 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:07:47.059 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:47.059 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:47.059 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:47.059 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:47.059 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:47.059 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:07:47.059 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:07:47.059 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:07:47.059 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:07:47.059 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:07:47.059 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:07:47.059 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:07:47.059 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:07:47.059 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:47.059 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:07:47.059 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:47.059 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:47.059 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:47.059 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43830492 kB' 'MemAvailable: 47335200 kB' 'Buffers: 2704 kB' 'Cached: 12275560 kB' 'SwapCached: 0 kB' 'Active: 9266724 kB' 'Inactive: 3506364 kB' 'Active(anon): 8872316 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497972 kB' 'Mapped: 183472 kB' 'Shmem: 8377492 kB' 'KReclaimable: 195116 kB' 'Slab: 557900 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362784 kB' 'KernelStack: 12864 kB' 'PageTables: 7700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10000080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196244 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.060 08:03:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.060 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 
08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:47.061 08:03:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43839264 kB' 'MemAvailable: 47343972 kB' 'Buffers: 2704 kB' 'Cached: 12275576 kB' 'SwapCached: 0 kB' 'Active: 9263252 kB' 'Inactive: 3506364 kB' 'Active(anon): 8868844 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494516 kB' 'Mapped: 183752 kB' 'Shmem: 8377508 kB' 'KReclaimable: 195116 kB' 'Slab: 557980 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362864 kB' 'KernelStack: 12928 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9996256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 
08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.061 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 trace (IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue) repeated for each remaining /proc/meminfo field, Inactive through Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd ...]
00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@18 -- # local node= 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43833928 kB' 'MemAvailable: 47338636 kB' 'Buffers: 2704 kB' 'Cached: 12275580 kB' 'SwapCached: 0 kB' 'Active: 9266388 kB' 'Inactive: 3506364 kB' 'Active(anon): 8871980 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497684 kB' 'Mapped: 183744 kB' 'Shmem: 8377512 kB' 'KReclaimable: 195116 kB' 'Slab: 557980 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362864 kB' 'KernelStack: 12896 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10000120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196228 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:47.063 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 trace (IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue) repeated for each /proc/meminfo field, MemFree through FilePmdMapped, CmaTotal ...]
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:47.065 08:03:56
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:47.065 08:03:56 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:47.065 nr_hugepages=1024
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:47.065 resv_hugepages=0
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:47.065 surplus_hugepages=0
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:47.065 anon_hugepages=0
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:47.065 08:03:56
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43833392 kB' 'MemAvailable: 47338100 kB' 'Buffers: 2704 kB' 'Cached: 12275600 kB' 'SwapCached: 0 kB' 'Active: 9267044 kB' 'Inactive: 3506364 kB' 'Active(anon): 8872636 kB' 'Inactive(anon): 0 kB' 'Active(file): 394408 kB' 'Inactive(file): 3506364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498268 kB' 'Mapped: 183328 kB' 'Shmem: 8377532 kB' 'KReclaimable: 195116 kB' 'Slab: 557972 kB' 'SReclaimable: 195116 kB' 'SUnreclaim: 362856 kB' 'KernelStack: 12880 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10000140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196212 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 13895680 kB' 'DirectMap1G: 53477376 kB' 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:47.065 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical "[[ <key> == HugePages_Total ]] / continue / IFS=': ' / read -r var val _" trace repeated for the remaining /proc/meminfo keys (MemAvailable through Unaccepted) ...]
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:47.067 08:03:56
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:47.067 08:03:56
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25633180 kB' 'MemUsed: 7196704 kB' 'SwapCached: 0 kB' 'Active: 3967280 kB' 'Inactive: 154724 kB' 'Active(anon): 3806628 kB' 'Inactive(anon): 0 kB' 'Active(file): 160652 kB' 'Inactive(file): 154724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3862276 kB' 'Mapped: 64608 kB' 'AnonPages: 262932 kB' 'Shmem: 3546900 kB' 'KernelStack: 7512 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103700 kB' 'Slab: 331040 kB' 'SReclaimable: 103700 kB' 'SUnreclaim: 227340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.067 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:47.068 node0=1024 expecting 1024 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:47.068 00:07:47.068 real 0m2.847s 00:07:47.068 user 0m1.169s 
00:07:47.068 sys 0m1.597s 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.068 08:03:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:47.068 ************************************ 00:07:47.068 END TEST no_shrink_alloc 00:07:47.068 ************************************ 00:07:47.068 08:03:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:47.068 08:03:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:47.068 00:07:47.068 real 0m11.385s 00:07:47.068 user 0m4.453s 00:07:47.068 sys 0m5.845s 00:07:47.068 
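The long `mapfile` / `IFS=': ' read -r var val _` loop traced above is the log of `setup/common.sh` walking per-node meminfo one field at a time until it hits `HugePages_Surp`. A minimal standalone sketch of that parsing pattern follows; the function name `get_node_meminfo` is illustrative, not the exact SPDK helper, and it assumes bash with `extglob` so the `Node <n> ` prefix strip works:

```shell
#!/usr/bin/env bash
# Sketch of the per-node meminfo parsing seen in the trace above.
# Falls back to /proc/meminfo when the per-node file is absent.
shopt -s extglob

get_node_meminfo() {
    local node=$1 field=$2 var val _ line
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 MemTotal: ... kB"; drop the prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "MemTotal:   32829884 kB" into name, value, unit.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$field" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_node_meminfo 0 MemTotal
```

The trace is so long because the script re-runs the `[[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` comparison (with every character escaped by xtrace) for each of the ~35 meminfo fields per node.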
08:03:56 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.068 08:03:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:47.068 ************************************ 00:07:47.068 END TEST hugepages 00:07:47.068 ************************************ 00:07:47.068 08:03:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:47.068 08:03:56 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:07:47.068 08:03:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.068 08:03:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.068 08:03:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:47.068 ************************************ 00:07:47.068 START TEST driver 00:07:47.068 ************************************ 00:07:47.068 08:03:56 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:07:47.325 * Looking for test storage... 
00:07:47.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:47.325 08:03:56 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:07:47.325 08:03:56 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:47.325 08:03:56 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:49.850 08:03:59 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:49.850 08:03:59 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.850 08:03:59 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.850 08:03:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:49.850 ************************************ 00:07:49.850 START TEST guess_driver 00:07:49.850 ************************************ 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 141 > 0 )) 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:07:49.850 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:49.850 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:49.850 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:49.850 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:49.850 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:07:49.850 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:07:49.850 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:07:49.850 Looking for driver=vfio-pci 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:07:49.850 08:03:59 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:50.781 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:51.037 08:04:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.967 08:04:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:51.967 08:04:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:51.967 08:04:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:51.967 08:04:01 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:51.967 08:04:01 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:07:51.967 08:04:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:51.967 08:04:01 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:54.491 00:07:54.491 real 0m4.773s 00:07:54.491 user 0m1.107s 00:07:54.491 sys 0m1.786s 00:07:54.491 08:04:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.491 08:04:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:07:54.491 ************************************ 00:07:54.491 END TEST guess_driver 00:07:54.491 ************************************ 00:07:54.491 08:04:03 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:07:54.491 00:07:54.491 real 0m7.301s 00:07:54.491 user 0m1.646s 00:07:54.491 sys 0m2.789s 00:07:54.491 08:04:03 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.491 08:04:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:54.491 ************************************ 00:07:54.491 END TEST driver 00:07:54.491 ************************************ 00:07:54.491 08:04:03 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:54.491 08:04:03 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:07:54.491 08:04:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.491 08:04:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.491 08:04:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:54.491 ************************************ 00:07:54.491 START TEST devices 00:07:54.491 ************************************ 00:07:54.491 08:04:04 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:07:54.491 * Looking for test storage... 
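The `guess_driver` test above picks `vfio-pci` because the node exposes IOMMU groups (`(( 141 > 0 ))`) and `modprobe --show-depends vfio_pci` resolves to real `*.ko` modules. A simplified reconstruction of that decision, not the exact `setup/driver.sh` logic, with `uio_pci_generic` as the assumed fallback:

```shell
#!/usr/bin/env bash
# Sketch of the driver-guessing logic traced above (setup/driver.sh).
shopt -s nullglob   # empty glob -> empty array, not a literal pattern

# True when a `modprobe --show-depends` listing resolved to kernel modules,
# i.e. it contains insmod lines ending in .ko / .ko.xz.
deps_resolved() {
    [[ $1 == *.ko* ]]
}

pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*) deps
    if (( ${#groups[@]} > 0 )) \
        && deps=$(modprobe --show-depends vfio_pci 2>/dev/null) \
        && deps_resolved "$deps"; then
        echo vfio-pci
    elif deps=$(modprobe --show-depends uio_pci_generic 2>/dev/null) \
        && deps_resolved "$deps"; then
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

pick_driver || true
```

`modprobe --show-depends` prints the insmod commands it would run without loading anything, which is why the log above shows the full `irqbypass.ko.xz` / `vfio.ko.xz` / `vfio-pci.ko.xz` dependency chain.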
00:07:54.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:54.491 08:04:04 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:54.491 08:04:04 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:07:54.491 08:04:04 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:54.491 08:04:04 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:55.863 08:04:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:55.863 08:04:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:55.863 08:04:05 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:55.863 08:04:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:55.863 08:04:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:55.863 08:04:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:55.863 08:04:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:55.863 08:04:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:07:55.863 08:04:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:55.863 08:04:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:55.863 08:04:05 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:07:56.121 No valid GPT data, bailing 00:07:56.121 08:04:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:56.121 08:04:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:56.121 08:04:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:56.121 08:04:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:56.121 08:04:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:56.121 08:04:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:56.121 08:04:05 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:07:56.121 08:04:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:07:56.121 08:04:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:56.121 08:04:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:07:56.121 08:04:05 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:07:56.121 08:04:05 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:56.121 08:04:05 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:56.121 08:04:05 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.121 08:04:05 setup.sh.devices -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.121 08:04:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:56.121 ************************************ 00:07:56.121 START TEST nvme_mount 00:07:56.121 ************************************ 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:56.121 08:04:05 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:56.121 08:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:57.089 Creating new GPT entries in memory. 00:07:57.089 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:57.089 other utilities. 00:07:57.089 08:04:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:57.089 08:04:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:57.089 08:04:06 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:57.089 08:04:06 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:57.089 08:04:06 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:58.021 Creating new GPT entries in memory. 00:07:58.021 The operation has completed successfully. 
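The sgdisk range 1:2048:2099199 above is not arbitrary: the common.sh arithmetic visible in the trace divides the 1 GiB byte size by 512 to get sectors, anchors the first partition at sector 2048, and starts each later partition right after the previous one ends. A hypothetical distillation of that arithmetic (the function name is invented for illustration):

```shell
# Sketch of the partition-range math from setup/common.sh@51 and @57-60.
part_ranges() {
  local size=$1 part_no=$2 part part_start=0 part_end=0
  (( size /= 512 ))                     # bytes -> 512-byte sectors
  for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "--new=$part:$part_start:$part_end"
  done
}
part_ranges $((1 * 1024 * 1024 * 1024)) 2   # two 1 GiB partitions
```

With a 1 GiB size this reproduces both ranges the log passes to sgdisk: `--new=1:2048:2099199` here in nvme_mount and `--new=2:2099200:4196351` later in dm_mount.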
00:07:58.021 08:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:58.021 08:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:58.021 08:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3977377 00:07:58.021 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:58.021 08:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:07:58.021 08:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:58.021 08:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:58.021 08:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
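The verify call entered above boils down to three checks in the trace that follows: the mount point exists, `mountpoint -q` confirms it is mounted, and the dummy test file is present before cleanup removes it. A rough stand-in for the file-presence part, using a temp directory instead of a real NVMe mount (the loopback-free setup and echoed message are illustrative, not from devices.sh):

```shell
# Illustrative stand-in for the devices.sh verify/cleanup file checks;
# a temp dir replaces the real nvme_mount mount point, so no device is needed.
mount_point=$(mktemp -d)
test_file="$mount_point/test_nvme"
: > "$test_file"                         # dummy test file, as verify creates via ':'
if [[ -e "$test_file" ]]; then           # devices.sh@73-style existence check
  echo "test file present"
fi
rm "$test_file" && rmdir "$mount_point"  # cleanup, as in devices.sh@74
```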
00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:58.278 08:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.208 08:04:08 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.208 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:07:59.209 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.466 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:59.466 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:59.466 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:59.466 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:59.466 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:59.466 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:59.466 
08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:59.466 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:59.466 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:59.466 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:59.466 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:59.466 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:59.466 08:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:59.722 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:59.722 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:07:59.722 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:59.722 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:59.722 08:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:01.091 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:08:01.092 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.092 08:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:08:01.092 08:04:10 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:08:01.092 08:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:01.092 08:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.023 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:02.024 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.281 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:02.281 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:02.281 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:08:02.281 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:08:02.281 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:08:02.281 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:02.281 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:02.281 08:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:02.281 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:02.281 00:08:02.281 real 0m6.212s 00:08:02.281 user 0m1.390s 00:08:02.281 sys 0m2.369s 00:08:02.281 08:04:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.281 08:04:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:08:02.281 ************************************ 00:08:02.281 END TEST nvme_mount 00:08:02.281 ************************************ 00:08:02.281 08:04:11 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:08:02.281 08:04:11 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
00:08:02.281 08:04:11 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.281 08:04:11 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.281 08:04:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:02.281 ************************************ 00:08:02.281 START TEST dm_mount 00:08:02.281 ************************************ 00:08:02.281 08:04:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:08:02.281 08:04:11 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:08:02.281 08:04:11 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:08:02.281 08:04:11 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:08:02.281 08:04:11 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:02.282 08:04:11 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:08:03.652 Creating new GPT entries in memory. 00:08:03.652 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:03.652 other utilities. 00:08:03.652 08:04:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:08:03.652 08:04:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:03.652 08:04:12 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:03.652 08:04:12 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:03.652 08:04:12 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:08:04.582 Creating new GPT entries in memory. 00:08:04.582 The operation has completed successfully. 00:08:04.582 08:04:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:04.582 08:04:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:04.582 08:04:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:08:04.582 08:04:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:04.582 08:04:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:08:05.513 The operation has completed successfully. 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3979769 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:05.513 08:04:14 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:08:06.883 08:04:16 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:08:06.883 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:06.884 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:06.884 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:06.884 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:06.884 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:08:06.884 08:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:06.884 08:04:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:06.884 08:04:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:08:07.814 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:08:08.070 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:08:08.070 00:08:08.070 real 0m5.830s 00:08:08.070 user 0m0.989s 00:08:08.070 sys 0m1.693s 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.070 08:04:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:08:08.070 ************************************ 00:08:08.070 END TEST dm_mount 00:08:08.070 ************************************ 00:08:08.070 08:04:17 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:08:08.070 08:04:17 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:08:08.070 08:04:17 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:08:08.070 08:04:17 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:08:08.070 08:04:17 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:08.070 08:04:17 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:08.070 08:04:17 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:08.070 08:04:17 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:08.327 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:08:08.327 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:08:08.327 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:08.327 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:08.327 08:04:17 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:08:08.327 08:04:17 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:08.584 08:04:17 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:08.584 08:04:17 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:08.584 08:04:17 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:08.584 08:04:17 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:08.584 08:04:17 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:08.584 00:08:08.584 real 0m13.948s 00:08:08.584 user 0m3.002s 00:08:08.584 sys 0m5.110s 00:08:08.584 08:04:17 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.584 08:04:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:08.584 ************************************ 00:08:08.584 END TEST devices 00:08:08.584 ************************************ 00:08:08.584 08:04:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:08:08.584 00:08:08.584 real 0m43.102s 00:08:08.584 user 0m12.360s 00:08:08.584 sys 0m18.965s 00:08:08.584 08:04:17 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.584 08:04:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:08.584 ************************************ 00:08:08.584 END TEST setup.sh 00:08:08.584 ************************************ 00:08:08.584 08:04:18 -- common/autotest_common.sh@1142 -- # return 0 00:08:08.584 08:04:18 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:09.513 Hugepages 00:08:09.513 node hugesize free / total 
00:08:09.513 node0 1048576kB 0 / 0 00:08:09.513 node0 2048kB 2048 / 2048 00:08:09.513 node1 1048576kB 0 / 0 00:08:09.513 node1 2048kB 0 / 0 00:08:09.513 00:08:09.513 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:09.513 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:08:09.513 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:08:09.513 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:08:09.513 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:08:09.513 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:08:09.513 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:08:09.513 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:08:09.513 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:08:09.513 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:08:09.513 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:08:09.513 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:08:09.513 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:08:09.513 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:08:09.514 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:08:09.514 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:08:09.514 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:08:09.770 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:08:09.770 08:04:19 -- spdk/autotest.sh@130 -- # uname -s 00:08:09.770 08:04:19 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:08:09.770 08:04:19 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:08:09.770 08:04:19 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:11.138 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:11.138 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:11.138 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:11.138 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:08:11.138 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:11.138 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:08:11.138 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:11.138 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:11.138 
0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:11.138 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:11.138 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:11.138 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:08:11.138 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:11.138 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:08:11.138 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:11.138 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:12.091 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:08:12.091 08:04:21 -- common/autotest_common.sh@1532 -- # sleep 1 00:08:13.021 08:04:22 -- common/autotest_common.sh@1533 -- # bdfs=() 00:08:13.022 08:04:22 -- common/autotest_common.sh@1533 -- # local bdfs 00:08:13.022 08:04:22 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:08:13.022 08:04:22 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:08:13.022 08:04:22 -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:13.022 08:04:22 -- common/autotest_common.sh@1513 -- # local bdfs 00:08:13.022 08:04:22 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:13.022 08:04:22 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:13.022 08:04:22 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:13.022 08:04:22 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:08:13.022 08:04:22 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:08:13.022 08:04:22 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:14.391 Waiting for block devices as requested 00:08:14.391 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:08:14.391 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:08:14.391 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:08:14.391 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:08:14.648 0000:00:04.4 (8086 
0e24): vfio-pci -> ioatdma 00:08:14.648 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:08:14.648 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:08:14.648 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:08:14.905 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:08:14.905 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:08:14.905 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:08:14.905 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:08:15.161 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:08:15.161 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:08:15.161 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:08:15.161 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:08:15.418 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:08:15.418 08:04:24 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:08:15.418 08:04:24 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:08:15.418 08:04:24 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:08:15.418 08:04:24 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:08:15.418 08:04:24 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:08:15.418 08:04:24 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:08:15.418 08:04:24 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:08:15.418 08:04:24 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:08:15.418 08:04:24 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:08:15.418 08:04:24 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:08:15.418 08:04:24 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:08:15.418 08:04:24 -- common/autotest_common.sh@1545 -- # grep oacs 00:08:15.418 08:04:24 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:08:15.418 08:04:24 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:08:15.418 08:04:24 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:08:15.418 08:04:24 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:08:15.418 08:04:24 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:08:15.418 08:04:24 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:08:15.418 08:04:24 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:08:15.418 08:04:24 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:08:15.418 08:04:24 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:08:15.418 08:04:24 -- common/autotest_common.sh@1557 -- # continue 00:08:15.419 08:04:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:08:15.419 08:04:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:15.419 08:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:15.419 08:04:25 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:08:15.419 08:04:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.419 08:04:25 -- common/autotest_common.sh@10 -- # set +x 00:08:15.419 08:04:25 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:16.789 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:16.789 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:16.789 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:16.789 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:08:16.789 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:16.789 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:08:16.789 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:16.789 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:16.789 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:08:16.789 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:08:16.789 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:08:16.789 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:08:16.789 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:08:16.789 0000:80:04.2 (8086 
0e22): ioatdma -> vfio-pci 00:08:16.789 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:08:16.789 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:08:17.720 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:08:17.720 08:04:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:08:17.720 08:04:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:17.720 08:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:17.720 08:04:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:08:17.720 08:04:27 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:08:17.720 08:04:27 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:08:17.720 08:04:27 -- common/autotest_common.sh@1577 -- # bdfs=() 00:08:17.720 08:04:27 -- common/autotest_common.sh@1577 -- # local bdfs 00:08:17.720 08:04:27 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:08:17.720 08:04:27 -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:17.720 08:04:27 -- common/autotest_common.sh@1513 -- # local bdfs 00:08:17.720 08:04:27 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:17.720 08:04:27 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:17.720 08:04:27 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:17.720 08:04:27 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:08:17.720 08:04:27 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:08:17.720 08:04:27 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:08:17.720 08:04:27 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:08:17.720 08:04:27 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:08:17.720 08:04:27 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:17.720 08:04:27 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:08:17.720 08:04:27 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:08:17.720 08:04:27 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:08:17.720 08:04:27 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3984944 00:08:17.720 08:04:27 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:17.720 08:04:27 -- common/autotest_common.sh@1598 -- # waitforlisten 3984944 00:08:17.720 08:04:27 -- common/autotest_common.sh@829 -- # '[' -z 3984944 ']' 00:08:17.720 08:04:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.720 08:04:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.720 08:04:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.720 08:04:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.720 08:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:17.977 [2024-07-21 08:04:27.394955] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
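The pre-cleanup records above decide whether the controller supports namespace management by parsing `nvme id-ctrl` output: the `oacs` line is grepped out, the value is cut off after the colon, and bit 3 (mask `0x8`) is tested, yielding `oacs_ns_manage=8` for this drive's `oacs=0xf`. A minimal, self-contained sketch of that check follows; the `id-ctrl` output line is hard-coded here as a stand-in, since the real script reads it from `/dev/nvme0`:

```shell
# Sketch of the autotest OACS check. The id-ctrl line is simulated;
# the log's script obtains it via "nvme id-ctrl /dev/nvme0".
idctrl_line='oacs      : 0xf'
# Same pipeline as the trace: grep the field, cut the value after ':'.
oacs=$(printf '%s\n' "$idctrl_line" | grep oacs | cut -d: -f2)
# Bit 3 (0x8) of OACS flags Namespace Management support.
oacs_ns_manage=$(( oacs & 0x8 ))
echo "$oacs_ns_manage"
```

With `oacs=0xf` the masked result is `8`, matching the `oacs_ns_manage=8` assignment in the trace, so the script proceeds to the `unvmcap` check rather than skipping the drive.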
00:08:17.977 [2024-07-21 08:04:27.395044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3984944 ] 00:08:17.977 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.977 [2024-07-21 08:04:27.452715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.977 [2024-07-21 08:04:27.539542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.233 08:04:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.233 08:04:27 -- common/autotest_common.sh@862 -- # return 0 00:08:18.233 08:04:27 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:08:18.233 08:04:27 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:08:18.233 08:04:27 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:08:21.505 nvme0n1 00:08:21.505 08:04:30 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:21.505 [2024-07-21 08:04:31.101233] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:08:21.505 [2024-07-21 08:04:31.101276] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:08:21.505 request: 00:08:21.505 { 00:08:21.505 "nvme_ctrlr_name": "nvme0", 00:08:21.505 "password": "test", 00:08:21.505 "method": "bdev_nvme_opal_revert", 00:08:21.505 "req_id": 1 00:08:21.505 } 00:08:21.505 Got JSON-RPC error response 00:08:21.505 response: 00:08:21.505 { 00:08:21.505 "code": -32603, 00:08:21.505 "message": "Internal error" 00:08:21.505 } 00:08:21.505 08:04:31 -- common/autotest_common.sh@1604 -- # true 00:08:21.505 08:04:31 -- common/autotest_common.sh@1605 -- # 
(( ++bdf_id )) 00:08:21.505 08:04:31 -- common/autotest_common.sh@1608 -- # killprocess 3984944 00:08:21.505 08:04:31 -- common/autotest_common.sh@948 -- # '[' -z 3984944 ']' 00:08:21.505 08:04:31 -- common/autotest_common.sh@952 -- # kill -0 3984944 00:08:21.505 08:04:31 -- common/autotest_common.sh@953 -- # uname 00:08:21.505 08:04:31 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.505 08:04:31 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3984944 00:08:21.763 08:04:31 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:21.763 08:04:31 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:21.763 08:04:31 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3984944' 00:08:21.763 killing process with pid 3984944 00:08:21.763 08:04:31 -- common/autotest_common.sh@967 -- # kill 3984944 00:08:21.763 08:04:31 -- common/autotest_common.sh@972 -- # wait 3984944 00:08:21.763 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152
00:08:23.658 08:04:32 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:08:23.658 08:04:32 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:08:23.658 08:04:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:23.658 08:04:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:23.658 08:04:32 -- spdk/autotest.sh@162 -- # timing_enter lib 00:08:23.658 08:04:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.658 08:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:23.658 08:04:32 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:08:23.658 08:04:32 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:23.658 08:04:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:23.658 08:04:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.658 08:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:23.658 ************************************ 00:08:23.658 START TEST env 00:08:23.658 ************************************ 00:08:23.658 08:04:32 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:23.658 * Looking for test storage... 
00:08:23.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:23.658 08:04:32 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:23.659 08:04:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:23.659 08:04:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.659 08:04:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:23.659 ************************************ 00:08:23.659 START TEST env_memory 00:08:23.659 ************************************ 00:08:23.659 08:04:33 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:23.659 00:08:23.659 00:08:23.659 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.659 http://cunit.sourceforge.net/ 00:08:23.659 00:08:23.659 00:08:23.659 Suite: memory 00:08:23.659 Test: alloc and free memory map ...[2024-07-21 08:04:33.038212] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:23.659 passed 00:08:23.659 Test: mem map translation ...[2024-07-21 08:04:33.058193] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:23.659 [2024-07-21 08:04:33.058215] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:23.659 [2024-07-21 08:04:33.058265] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:23.659 [2024-07-21 08:04:33.058277] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:23.659 passed 00:08:23.659 Test: mem map registration ...[2024-07-21 08:04:33.099767] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:23.659 [2024-07-21 08:04:33.099788] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:23.659 passed 00:08:23.659 Test: mem map adjacent registrations ...passed 00:08:23.659 00:08:23.659 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.659 suites 1 1 n/a 0 0 00:08:23.659 tests 4 4 4 0 0 00:08:23.659 asserts 152 152 152 0 n/a 00:08:23.659 00:08:23.659 Elapsed time = 0.145 seconds 00:08:23.659 00:08:23.659 real 0m0.153s 00:08:23.659 user 0m0.147s 00:08:23.659 sys 0m0.006s 00:08:23.659 08:04:33 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.659 08:04:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:23.659 ************************************ 00:08:23.659 END TEST env_memory 00:08:23.659 ************************************ 00:08:23.659 08:04:33 env -- common/autotest_common.sh@1142 -- # return 0 00:08:23.659 08:04:33 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:23.659 08:04:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:23.659 08:04:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.659 08:04:33 env -- common/autotest_common.sh@10 -- # set +x 00:08:23.659 ************************************ 00:08:23.659 START TEST env_vtophys 00:08:23.659 ************************************ 00:08:23.659 08:04:33 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 
00:08:23.659 EAL: lib.eal log level changed from notice to debug 00:08:23.659 EAL: Detected lcore 0 as core 0 on socket 0 00:08:23.659 EAL: Detected lcore 1 as core 1 on socket 0 00:08:23.659 EAL: Detected lcore 2 as core 2 on socket 0 00:08:23.659 EAL: Detected lcore 3 as core 3 on socket 0 00:08:23.659 EAL: Detected lcore 4 as core 4 on socket 0 00:08:23.659 EAL: Detected lcore 5 as core 5 on socket 0 00:08:23.659 EAL: Detected lcore 6 as core 8 on socket 0 00:08:23.659 EAL: Detected lcore 7 as core 9 on socket 0 00:08:23.659 EAL: Detected lcore 8 as core 10 on socket 0 00:08:23.659 EAL: Detected lcore 9 as core 11 on socket 0 00:08:23.659 EAL: Detected lcore 10 as core 12 on socket 0 00:08:23.659 EAL: Detected lcore 11 as core 13 on socket 0 00:08:23.659 EAL: Detected lcore 12 as core 0 on socket 1 00:08:23.659 EAL: Detected lcore 13 as core 1 on socket 1 00:08:23.659 EAL: Detected lcore 14 as core 2 on socket 1 00:08:23.659 EAL: Detected lcore 15 as core 3 on socket 1 00:08:23.659 EAL: Detected lcore 16 as core 4 on socket 1 00:08:23.659 EAL: Detected lcore 17 as core 5 on socket 1 00:08:23.659 EAL: Detected lcore 18 as core 8 on socket 1 00:08:23.659 EAL: Detected lcore 19 as core 9 on socket 1 00:08:23.659 EAL: Detected lcore 20 as core 10 on socket 1 00:08:23.659 EAL: Detected lcore 21 as core 11 on socket 1 00:08:23.659 EAL: Detected lcore 22 as core 12 on socket 1 00:08:23.659 EAL: Detected lcore 23 as core 13 on socket 1 00:08:23.659 EAL: Detected lcore 24 as core 0 on socket 0 00:08:23.659 EAL: Detected lcore 25 as core 1 on socket 0 00:08:23.659 EAL: Detected lcore 26 as core 2 on socket 0 00:08:23.659 EAL: Detected lcore 27 as core 3 on socket 0 00:08:23.659 EAL: Detected lcore 28 as core 4 on socket 0 00:08:23.659 EAL: Detected lcore 29 as core 5 on socket 0 00:08:23.659 EAL: Detected lcore 30 as core 8 on socket 0 00:08:23.659 EAL: Detected lcore 31 as core 9 on socket 0 00:08:23.659 EAL: Detected lcore 32 as core 10 on socket 0 00:08:23.659 EAL: 
Detected lcore 33 as core 11 on socket 0 00:08:23.659 EAL: Detected lcore 34 as core 12 on socket 0 00:08:23.659 EAL: Detected lcore 35 as core 13 on socket 0 00:08:23.659 EAL: Detected lcore 36 as core 0 on socket 1 00:08:23.659 EAL: Detected lcore 37 as core 1 on socket 1 00:08:23.659 EAL: Detected lcore 38 as core 2 on socket 1 00:08:23.659 EAL: Detected lcore 39 as core 3 on socket 1 00:08:23.659 EAL: Detected lcore 40 as core 4 on socket 1 00:08:23.659 EAL: Detected lcore 41 as core 5 on socket 1 00:08:23.659 EAL: Detected lcore 42 as core 8 on socket 1 00:08:23.659 EAL: Detected lcore 43 as core 9 on socket 1 00:08:23.659 EAL: Detected lcore 44 as core 10 on socket 1 00:08:23.659 EAL: Detected lcore 45 as core 11 on socket 1 00:08:23.659 EAL: Detected lcore 46 as core 12 on socket 1 00:08:23.659 EAL: Detected lcore 47 as core 13 on socket 1 00:08:23.659 EAL: Maximum logical cores by configuration: 128 00:08:23.659 EAL: Detected CPU lcores: 48 00:08:23.659 EAL: Detected NUMA nodes: 2 00:08:23.659 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:08:23.659 EAL: Detected shared linkage of DPDK 00:08:23.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:08:23.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:08:23.659 EAL: Registered [vdev] bus. 
00:08:23.659 EAL: bus.vdev log level changed from disabled to notice 00:08:23.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:08:23.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:08:23.659 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:08:23.659 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:08:23.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:08:23.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:08:23.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:08:23.659 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:08:23.659 EAL: No shared files mode enabled, IPC will be disabled 00:08:23.659 EAL: No shared files mode enabled, IPC is disabled 00:08:23.659 EAL: Bus pci wants IOVA as 'DC' 00:08:23.659 EAL: Bus vdev wants IOVA as 'DC' 00:08:23.659 EAL: Buses did not request a specific IOVA mode. 00:08:23.659 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:23.659 EAL: Selected IOVA mode 'VA' 00:08:23.659 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.659 EAL: Probing VFIO support... 00:08:23.659 EAL: IOMMU type 1 (Type 1) is supported 00:08:23.659 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:23.659 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:23.659 EAL: VFIO support initialized 00:08:23.659 EAL: Ask a virtual area of 0x2e000 bytes 00:08:23.659 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:23.659 EAL: Setting up physically contiguous memory... 
00:08:23.659 EAL: Setting maximum number of open files to 524288 00:08:23.659 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:23.659 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:23.659 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:23.659 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.659 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:23.659 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:23.659 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.659 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:23.659 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:23.659 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.659 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:23.659 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:23.659 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.659 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:23.659 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:23.659 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.659 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:23.659 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:23.659 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.659 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:23.659 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:23.659 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.659 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:23.659 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:23.659 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.659 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:23.659 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:23.659 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:08:23.659 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.659 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:23.659 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:23.659 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.659 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:23.659 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:23.659 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.659 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:23.659 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:23.659 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.659 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:23.659 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:23.659 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.659 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:23.659 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:23.659 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.659 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:23.660 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:23.660 EAL: Ask a virtual area of 0x61000 bytes 00:08:23.660 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:23.660 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:23.660 EAL: Ask a virtual area of 0x400000000 bytes 00:08:23.660 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:08:23.660 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:23.660 EAL: Hugepages will be freed exactly as allocated. 
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: TSC frequency is ~2700000 KHz
00:08:23.660 EAL: Main lcore 0 is ready (tid=7f5677cc4a00;cpuset=[0])
00:08:23.660 EAL: Trying to obtain current memory policy.
00:08:23.660 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:23.660 EAL: Restoring previous memory policy: 0
00:08:23.660 EAL: request: mp_malloc_sync
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: Heap on socket 0 was expanded by 2MB
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: No PCI address specified using 'addr=' in: bus=pci
00:08:23.660 EAL: Mem event callback 'spdk:(nil)' registered
00:08:23.660
00:08:23.660
00:08:23.660 CUnit - A unit testing framework for C - Version 2.1-3
00:08:23.660 http://cunit.sourceforge.net/
00:08:23.660
00:08:23.660
00:08:23.660 Suite: components_suite
00:08:23.660 Test: vtophys_malloc_test ...passed
00:08:23.660 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:08:23.660 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:23.660 EAL: Restoring previous memory policy: 4
00:08:23.660 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.660 EAL: request: mp_malloc_sync
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: Heap on socket 0 was expanded by 4MB
00:08:23.660 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.660 EAL: request: mp_malloc_sync
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: Heap on socket 0 was shrunk by 4MB
00:08:23.660 EAL: Trying to obtain current memory policy.
00:08:23.660 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:23.660 EAL: Restoring previous memory policy: 4
00:08:23.660 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.660 EAL: request: mp_malloc_sync
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: Heap on socket 0 was expanded by 6MB
00:08:23.660 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.660 EAL: request: mp_malloc_sync
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: Heap on socket 0 was shrunk by 6MB
00:08:23.660 EAL: Trying to obtain current memory policy.
00:08:23.660 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:23.660 EAL: Restoring previous memory policy: 4
00:08:23.660 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.660 EAL: request: mp_malloc_sync
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: Heap on socket 0 was expanded by 10MB
00:08:23.660 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.660 EAL: request: mp_malloc_sync
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: Heap on socket 0 was shrunk by 10MB
00:08:23.660 EAL: Trying to obtain current memory policy.
00:08:23.660 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:23.660 EAL: Restoring previous memory policy: 4
00:08:23.660 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.660 EAL: request: mp_malloc_sync
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: Heap on socket 0 was expanded by 18MB
00:08:23.660 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.660 EAL: request: mp_malloc_sync
00:08:23.660 EAL: No shared files mode enabled, IPC is disabled
00:08:23.660 EAL: Heap on socket 0 was shrunk by 18MB
00:08:23.660 EAL: Trying to obtain current memory policy.
00:08:23.660 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:23.916 EAL: Restoring previous memory policy: 4
00:08:23.916 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.916 EAL: request: mp_malloc_sync
00:08:23.916 EAL: No shared files mode enabled, IPC is disabled
00:08:23.916 EAL: Heap on socket 0 was expanded by 34MB
00:08:23.916 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.916 EAL: request: mp_malloc_sync
00:08:23.916 EAL: No shared files mode enabled, IPC is disabled
00:08:23.916 EAL: Heap on socket 0 was shrunk by 34MB
00:08:23.916 EAL: Trying to obtain current memory policy.
00:08:23.916 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:23.916 EAL: Restoring previous memory policy: 4
00:08:23.916 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.916 EAL: request: mp_malloc_sync
00:08:23.916 EAL: No shared files mode enabled, IPC is disabled
00:08:23.916 EAL: Heap on socket 0 was expanded by 66MB
00:08:23.916 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.916 EAL: request: mp_malloc_sync
00:08:23.916 EAL: No shared files mode enabled, IPC is disabled
00:08:23.916 EAL: Heap on socket 0 was shrunk by 66MB
00:08:23.916 EAL: Trying to obtain current memory policy.
00:08:23.916 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:23.916 EAL: Restoring previous memory policy: 4
00:08:23.916 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.916 EAL: request: mp_malloc_sync
00:08:23.916 EAL: No shared files mode enabled, IPC is disabled
00:08:23.916 EAL: Heap on socket 0 was expanded by 130MB
00:08:23.916 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.916 EAL: request: mp_malloc_sync
00:08:23.916 EAL: No shared files mode enabled, IPC is disabled
00:08:23.916 EAL: Heap on socket 0 was shrunk by 130MB
00:08:23.916 EAL: Trying to obtain current memory policy.
00:08:23.916 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:23.916 EAL: Restoring previous memory policy: 4
00:08:23.916 EAL: Calling mem event callback 'spdk:(nil)'
00:08:23.916 EAL: request: mp_malloc_sync
00:08:23.916 EAL: No shared files mode enabled, IPC is disabled
00:08:23.916 EAL: Heap on socket 0 was expanded by 258MB
00:08:24.172 EAL: Calling mem event callback 'spdk:(nil)'
00:08:24.172 EAL: request: mp_malloc_sync
00:08:24.172 EAL: No shared files mode enabled, IPC is disabled
00:08:24.172 EAL: Heap on socket 0 was shrunk by 258MB
00:08:24.172 EAL: Trying to obtain current memory policy.
00:08:24.172 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:24.172 EAL: Restoring previous memory policy: 4
00:08:24.172 EAL: Calling mem event callback 'spdk:(nil)'
00:08:24.172 EAL: request: mp_malloc_sync
00:08:24.172 EAL: No shared files mode enabled, IPC is disabled
00:08:24.172 EAL: Heap on socket 0 was expanded by 514MB
00:08:24.430 EAL: Calling mem event callback 'spdk:(nil)'
00:08:24.430 EAL: request: mp_malloc_sync
00:08:24.430 EAL: No shared files mode enabled, IPC is disabled
00:08:24.430 EAL: Heap on socket 0 was shrunk by 514MB
00:08:24.430 EAL: Trying to obtain current memory policy.
00:08:24.430 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:24.686 EAL: Restoring previous memory policy: 4
00:08:24.686 EAL: Calling mem event callback 'spdk:(nil)'
00:08:24.686 EAL: request: mp_malloc_sync
00:08:24.686 EAL: No shared files mode enabled, IPC is disabled
00:08:24.686 EAL: Heap on socket 0 was expanded by 1026MB
00:08:24.942 EAL: Calling mem event callback 'spdk:(nil)'
00:08:25.199 EAL: request: mp_malloc_sync
00:08:25.199 EAL: No shared files mode enabled, IPC is disabled
00:08:25.199 EAL: Heap on socket 0 was shrunk by 1026MB
00:08:25.199 passed
00:08:25.199
00:08:25.199 Run Summary: Type Total Ran Passed Failed Inactive
00:08:25.199 suites 1 1 n/a 0 0
00:08:25.199 tests 2 2 2 0 0
00:08:25.199 asserts 497 497 497 0 n/a
00:08:25.199
00:08:25.199 Elapsed time = 1.372 seconds
00:08:25.199 EAL: Calling mem event callback 'spdk:(nil)'
00:08:25.199 EAL: request: mp_malloc_sync
00:08:25.199 EAL: No shared files mode enabled, IPC is disabled
00:08:25.199 EAL: Heap on socket 0 was shrunk by 2MB
00:08:25.199 EAL: No shared files mode enabled, IPC is disabled
00:08:25.199 EAL: No shared files mode enabled, IPC is disabled
00:08:25.199 EAL: No shared files mode enabled, IPC is disabled
00:08:25.199
00:08:25.199 real 0m1.493s
00:08:25.199 user 0m0.861s
00:08:25.199 sys 0m0.591s 08:04:34 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:25.199 08:04:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:08:25.199 ************************************
00:08:25.199 END TEST env_vtophys
00:08:25.199 ************************************
00:08:25.199 08:04:34 env -- common/autotest_common.sh@1142 -- # return 0
00:08:25.199 08:04:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:08:25.199 08:04:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:25.199 08:04:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:25.199 08:04:34 env -- common/autotest_common.sh@10 -- # set +x
00:08:25.199 ************************************
00:08:25.199 START TEST env_pci
00:08:25.199 ************************************
00:08:25.199 08:04:34 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:08:25.199
00:08:25.199
00:08:25.199 CUnit - A unit testing framework for C - Version 2.1-3
00:08:25.199 http://cunit.sourceforge.net/
00:08:25.199
00:08:25.199
00:08:25.199 Suite: pci
00:08:25.199 Test: pci_hook ...[2024-07-21 08:04:34.748368] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3985843 has claimed it
00:08:25.199 EAL: Cannot find device (10000:00:01.0)
00:08:25.199 EAL: Failed to attach device on primary process
00:08:25.199 passed
00:08:25.199
00:08:25.199 Run Summary: Type Total Ran Passed Failed Inactive
00:08:25.199 suites 1 1 n/a 0 0
00:08:25.199 tests 1 1 1 0 0
00:08:25.199 asserts 25 25 25 0 n/a
00:08:25.199
00:08:25.199 Elapsed time = 0.020 seconds
00:08:25.199
00:08:25.199 real 0m0.031s
00:08:25.199 user 0m0.006s
00:08:25.199 sys 0m0.025s
00:08:25.199 08:04:34 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:25.199 08:04:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:08:25.199 ************************************
00:08:25.199 END TEST env_pci
00:08:25.199 ************************************
00:08:25.199 08:04:34 env -- common/autotest_common.sh@1142 -- # return 0
00:08:25.199 08:04:34 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:08:25.199 08:04:34 env -- env/env.sh@15 -- # uname
00:08:25.199 08:04:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:08:25.199 08:04:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:08:25.199 08:04:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:25.199 08:04:34 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:08:25.199 08:04:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:25.199 08:04:34 env -- common/autotest_common.sh@10 -- # set +x
00:08:25.199 ************************************
00:08:25.199 START TEST env_dpdk_post_init
00:08:25.199 ************************************
00:08:25.199 08:04:34 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:25.456 EAL: Detected CPU lcores: 48
00:08:25.456 EAL: Detected NUMA nodes: 2
00:08:25.456 EAL: Detected shared linkage of DPDK
00:08:25.456 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:25.456 EAL: Selected IOVA mode 'VA'
00:08:25.456 EAL: No free 2048 kB hugepages reported on node 1
00:08:25.456 EAL: VFIO support initialized
00:08:25.456 TELEMETRY: No legacy callbacks, legacy socket not created
00:08:25.456 EAL: Using IOMMU type 1 (Type 1)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:08:25.456 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:08:25.714 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:08:25.714 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:08:25.714 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:08:26.304 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1)
00:08:29.574 EAL: Releasing PCI mapped resource for 0000:88:00.0
00:08:29.574 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000
00:08:29.574 Starting DPDK initialization...
00:08:29.574 Starting SPDK post initialization...
00:08:29.574 SPDK NVMe probe
00:08:29.574 Attaching to 0000:88:00.0
00:08:29.574 Attached to 0000:88:00.0
00:08:29.574 Cleaning up...
00:08:29.574
00:08:29.574 real 0m4.381s
00:08:29.574 user 0m3.242s
00:08:29.574 sys 0m0.194s 08:04:39 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:29.574 08:04:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:08:29.574 ************************************
00:08:29.574 END TEST env_dpdk_post_init
00:08:29.574 ************************************
00:08:29.831 08:04:39 env -- common/autotest_common.sh@1142 -- # return 0
00:08:29.831 08:04:39 env -- env/env.sh@26 -- # uname
00:08:29.831 08:04:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:08:29.831 08:04:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:29.831 08:04:39 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:29.831 08:04:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:29.831 08:04:39 env -- common/autotest_common.sh@10 -- # set +x
00:08:29.831 ************************************
00:08:29.831 START TEST env_mem_callbacks
00:08:29.831 ************************************
00:08:29.831 08:04:39 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:29.831 EAL: Detected CPU lcores: 48
00:08:29.831 EAL: Detected NUMA nodes: 2
00:08:29.831 EAL: Detected shared linkage of DPDK
00:08:29.831 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:29.831 EAL: Selected IOVA mode 'VA'
00:08:29.831 EAL: No free 2048 kB hugepages reported on node 1
00:08:29.831 EAL: VFIO support initialized
00:08:29.831 TELEMETRY: No legacy callbacks, legacy socket not created
00:08:29.831
00:08:29.831
00:08:29.831 CUnit - A unit testing framework for C - Version 2.1-3
00:08:29.831 http://cunit.sourceforge.net/
00:08:29.831
00:08:29.831
00:08:29.831 Suite: memory
00:08:29.831 Test: test ...
00:08:29.831 register 0x200000200000 2097152
00:08:29.831 malloc 3145728
00:08:29.831 register 0x200000400000 4194304
00:08:29.831 buf 0x200000500000 len 3145728 PASSED
00:08:29.831 malloc 64
00:08:29.831 buf 0x2000004fff40 len 64 PASSED
00:08:29.831 malloc 4194304
00:08:29.831 register 0x200000800000 6291456
00:08:29.831 buf 0x200000a00000 len 4194304 PASSED
00:08:29.831 free 0x200000500000 3145728
00:08:29.831 free 0x2000004fff40 64
00:08:29.831 unregister 0x200000400000 4194304 PASSED
00:08:29.831 free 0x200000a00000 4194304
00:08:29.831 unregister 0x200000800000 6291456 PASSED
00:08:29.831 malloc 8388608
00:08:29.831 register 0x200000400000 10485760
00:08:29.831 buf 0x200000600000 len 8388608 PASSED
00:08:29.832 free 0x200000600000 8388608
00:08:29.832 unregister 0x200000400000 10485760 PASSED
00:08:29.832 passed
00:08:29.832
00:08:29.832 Run Summary: Type Total Ran Passed Failed Inactive
00:08:29.832 suites 1 1 n/a 0 0
00:08:29.832 tests 1 1 1 0 0
00:08:29.832 asserts 15 15 15 0 n/a
00:08:29.832
00:08:29.832 Elapsed time = 0.005 seconds
00:08:29.832
00:08:29.832 real 0m0.042s
00:08:29.832 user 0m0.010s
00:08:29.832 sys 0m0.032s 08:04:39 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:29.832 08:04:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:08:29.832 ************************************
00:08:29.832 END TEST env_mem_callbacks
00:08:29.832 ************************************
00:08:29.832 08:04:39 env -- common/autotest_common.sh@1142 -- # return 0
00:08:29.832
00:08:29.832 real 0m6.384s
00:08:29.832 user 0m4.366s
00:08:29.832 sys 0m1.046s 08:04:39 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:29.832 08:04:39 env -- common/autotest_common.sh@10 -- # set +x
00:08:29.832 ************************************
00:08:29.832 END TEST env
00:08:29.832 ************************************
00:08:29.832 08:04:39 -- common/autotest_common.sh@1142 -- # return 0
00:08:29.832 08:04:39 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:08:29.832 08:04:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:29.832 08:04:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:29.832 08:04:39 -- common/autotest_common.sh@10 -- # set +x
00:08:29.832 ************************************
00:08:29.832 START TEST rpc
00:08:29.832 ************************************
00:08:29.832 08:04:39 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:08:29.832 * Looking for test storage...
00:08:29.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:08:29.832 08:04:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3986493
00:08:29.832 08:04:39 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:08:29.832 08:04:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:29.832 08:04:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3986493
00:08:29.832 08:04:39 rpc -- common/autotest_common.sh@829 -- # '[' -z 3986493 ']'
00:08:29.832 08:04:39 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:29.832 08:04:39 rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:29.832 08:04:39 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:29.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:29.832 08:04:39 rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:29.832 08:04:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:29.832 [2024-07-21 08:04:39.459323] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization...
00:08:29.832 [2024-07-21 08:04:39.459403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3986493 ]
00:08:30.089 EAL: No free 2048 kB hugepages reported on node 1
00:08:30.089 [2024-07-21 08:04:39.516022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.089 [2024-07-21 08:04:39.600129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:08:30.089 [2024-07-21 08:04:39.600186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3986493' to capture a snapshot of events at runtime.
00:08:30.089 [2024-07-21 08:04:39.600214] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:30.089 [2024-07-21 08:04:39.600227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:30.089 [2024-07-21 08:04:39.600239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3986493 for offline analysis/debug.
00:08:30.089 [2024-07-21 08:04:39.600267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.347 08:04:39 rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:30.347 08:04:39 rpc -- common/autotest_common.sh@862 -- # return 0
00:08:30.347 08:04:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:08:30.347 08:04:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:08:30.347 08:04:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:08:30.347 08:04:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:08:30.347 08:04:39 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:30.347 08:04:39 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:30.347 08:04:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:30.347 ************************************
00:08:30.347 START TEST rpc_integrity
00:08:30.347 ************************************
00:08:30.347 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity
00:08:30.347 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:30.347 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.347 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:30.347 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.347 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:30.347 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:30.347 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:30.347 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:30.347 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.347 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:30.347 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.347 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:08:30.347 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:30.347 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.347 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:30.347 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.347 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:30.347 {
00:08:30.347 "name": "Malloc0",
00:08:30.347 "aliases": [
00:08:30.347 "b32a372d-21b9-4c0f-bb0c-5cf7fa333acf"
00:08:30.347 ],
00:08:30.347 "product_name": "Malloc disk",
00:08:30.347 "block_size": 512,
00:08:30.347 "num_blocks": 16384,
00:08:30.347 "uuid": "b32a372d-21b9-4c0f-bb0c-5cf7fa333acf",
00:08:30.347 "assigned_rate_limits": {
00:08:30.347 "rw_ios_per_sec": 0,
00:08:30.347 "rw_mbytes_per_sec": 0,
00:08:30.347 "r_mbytes_per_sec": 0,
00:08:30.347 "w_mbytes_per_sec": 0
00:08:30.347 },
00:08:30.347 "claimed": false,
00:08:30.347 "zoned": false,
00:08:30.347 "supported_io_types": {
00:08:30.347 "read": true,
00:08:30.347 "write": true,
00:08:30.347 "unmap": true,
00:08:30.347 "flush": true,
00:08:30.347 "reset": true,
00:08:30.347 "nvme_admin": false,
00:08:30.347 "nvme_io": false,
00:08:30.347 "nvme_io_md": false,
00:08:30.347 "write_zeroes": true,
00:08:30.347 "zcopy": true,
00:08:30.347 "get_zone_info": false,
00:08:30.347 "zone_management": false,
00:08:30.347 "zone_append": false,
00:08:30.347 "compare": false,
00:08:30.347 "compare_and_write": false,
00:08:30.347 "abort": true,
00:08:30.347 "seek_hole": false,
00:08:30.347 "seek_data": false,
00:08:30.347 "copy": true,
00:08:30.347 "nvme_iov_md": false
00:08:30.347 },
00:08:30.347 "memory_domains": [
00:08:30.347 {
00:08:30.347 "dma_device_id": "system",
00:08:30.347 "dma_device_type": 1
00:08:30.347 },
00:08:30.347 {
00:08:30.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:30.347 "dma_device_type": 2
00:08:30.347 }
00:08:30.347 ],
00:08:30.347 "driver_specific": {}
00:08:30.347 }
00:08:30.347 ]'
00:08:30.605 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:30.605 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:30.605 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:08:30.605 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.605 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:30.605 [2024-07-21 08:04:39.985630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:08:30.605 [2024-07-21 08:04:39.985695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:30.605 [2024-07-21 08:04:39.985718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1314af0
00:08:30.605 [2024-07-21 08:04:39.985732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:30.605 [2024-07-21 08:04:39.987212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:30.605 [2024-07-21 08:04:39.987239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:30.605 Passthru0
00:08:30.605 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.605 08:04:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:30.605 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.605 08:04:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:30.605 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.605 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:30.605 {
00:08:30.605 "name": "Malloc0",
00:08:30.605 "aliases": [
00:08:30.605 "b32a372d-21b9-4c0f-bb0c-5cf7fa333acf"
00:08:30.605 ],
00:08:30.605 "product_name": "Malloc disk",
00:08:30.605 "block_size": 512,
00:08:30.605 "num_blocks": 16384,
00:08:30.605 "uuid": "b32a372d-21b9-4c0f-bb0c-5cf7fa333acf",
00:08:30.605 "assigned_rate_limits": {
00:08:30.605 "rw_ios_per_sec": 0,
00:08:30.605 "rw_mbytes_per_sec": 0,
00:08:30.605 "r_mbytes_per_sec": 0,
00:08:30.605 "w_mbytes_per_sec": 0
00:08:30.605 },
00:08:30.605 "claimed": true,
00:08:30.605 "claim_type": "exclusive_write",
00:08:30.605 "zoned": false,
00:08:30.605 "supported_io_types": {
00:08:30.605 "read": true,
00:08:30.605 "write": true,
00:08:30.605 "unmap": true,
00:08:30.605 "flush": true,
00:08:30.605 "reset": true,
00:08:30.605 "nvme_admin": false,
00:08:30.605 "nvme_io": false,
00:08:30.605 "nvme_io_md": false,
00:08:30.605 "write_zeroes": true,
00:08:30.605 "zcopy": true,
00:08:30.605 "get_zone_info": false,
00:08:30.605 "zone_management": false,
00:08:30.605 "zone_append": false,
00:08:30.605 "compare": false,
00:08:30.605 "compare_and_write": false,
00:08:30.605 "abort": true,
00:08:30.605 "seek_hole": false,
00:08:30.605 "seek_data": false,
00:08:30.605 "copy": true,
00:08:30.605 "nvme_iov_md": false
00:08:30.605 },
00:08:30.605 "memory_domains": [
00:08:30.605 {
00:08:30.605 "dma_device_id": "system",
00:08:30.605 "dma_device_type": 1
00:08:30.605 },
00:08:30.605 {
00:08:30.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:30.605 "dma_device_type": 2
00:08:30.605 }
00:08:30.605 ],
00:08:30.605 "driver_specific": {}
00:08:30.605 },
00:08:30.605 {
00:08:30.605 "name": "Passthru0",
00:08:30.605 "aliases": [
00:08:30.605 "f82aced6-9c97-5275-8415-013c3a10b08b"
00:08:30.605 ],
00:08:30.605 "product_name": "passthru",
00:08:30.605 "block_size": 512,
00:08:30.605 "num_blocks": 16384,
00:08:30.605 "uuid": "f82aced6-9c97-5275-8415-013c3a10b08b",
00:08:30.605 "assigned_rate_limits": {
00:08:30.605 "rw_ios_per_sec": 0,
00:08:30.605 "rw_mbytes_per_sec": 0,
00:08:30.605 "r_mbytes_per_sec": 0,
00:08:30.605 "w_mbytes_per_sec": 0
00:08:30.605 },
00:08:30.605 "claimed": false,
00:08:30.605 "zoned": false,
00:08:30.605 "supported_io_types": {
00:08:30.605 "read": true,
00:08:30.605 "write": true,
00:08:30.605 "unmap": true,
00:08:30.605 "flush": true,
00:08:30.605 "reset": true,
00:08:30.605 "nvme_admin": false,
00:08:30.605 "nvme_io": false,
00:08:30.605 "nvme_io_md": false,
00:08:30.605 "write_zeroes": true,
00:08:30.605 "zcopy": true,
00:08:30.605 "get_zone_info": false,
00:08:30.605 "zone_management": false,
00:08:30.605 "zone_append": false,
00:08:30.605 "compare": false,
00:08:30.605 "compare_and_write": false,
00:08:30.605 "abort": true,
00:08:30.605 "seek_hole": false,
00:08:30.605 "seek_data": false,
00:08:30.605 "copy": true,
00:08:30.605 "nvme_iov_md": false
00:08:30.605 },
00:08:30.605 "memory_domains": [
00:08:30.605 {
00:08:30.605 "dma_device_id": "system",
00:08:30.605 "dma_device_type": 1
00:08:30.605 },
00:08:30.605 {
00:08:30.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:30.605 "dma_device_type": 2
00:08:30.605 }
00:08:30.605 ],
00:08:30.605 "driver_specific": {
00:08:30.605 "passthru": {
00:08:30.605 "name": "Passthru0",
00:08:30.605 "base_bdev_name": "Malloc0"
00:08:30.605 }
00:08:30.605 }
00:08:30.605 }
00:08:30.605 ]'
00:08:30.605 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:30.605 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:30.605 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:30.605 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.605 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:30.605 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.605 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:08:30.605 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.605 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:30.605 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.605 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:30.605 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.605 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:30.605 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.605 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:30.605 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:30.605 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:30.605
00:08:30.605 real 0m0.225s
00:08:30.605 user 0m0.145s
00:08:30.605 sys 0m0.026s 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:30.605 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:30.605 ************************************
00:08:30.605 END TEST rpc_integrity
00:08:30.605 ************************************
00:08:30.605 08:04:40 rpc -- common/autotest_common.sh@1142 -- # return 0
00:08:30.605 08:04:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:08:30.605 08:04:40 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:30.605 08:04:40 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:30.605 08:04:40 rpc -- common/autotest_common.sh@10 -- # set +x
************************************
00:08:30.605 START TEST rpc_plugins
00:08:30.605 ************************************
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins
00:08:30.605 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.605 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:08:30.605 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.605 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:08:30.605 {
00:08:30.605 "name": "Malloc1",
00:08:30.605 "aliases": [
00:08:30.605 "5e30fba4-b0fb-4f76-91a3-0289665aa9bf"
00:08:30.605 ],
00:08:30.605 "product_name": "Malloc disk",
00:08:30.605 "block_size": 4096,
00:08:30.605 "num_blocks": 256,
00:08:30.605 "uuid": "5e30fba4-b0fb-4f76-91a3-0289665aa9bf",
00:08:30.605 "assigned_rate_limits": {
00:08:30.605 "rw_ios_per_sec": 0,
00:08:30.605 "rw_mbytes_per_sec": 0,
00:08:30.605 "r_mbytes_per_sec": 0,
00:08:30.605 "w_mbytes_per_sec": 0
00:08:30.605 },
00:08:30.605 "claimed": false,
00:08:30.605 "zoned": false,
00:08:30.605 "supported_io_types": {
00:08:30.605 "read": true,
00:08:30.605 "write": true,
00:08:30.605 "unmap": true,
00:08:30.605 "flush": true,
00:08:30.605 "reset": true,
00:08:30.605 "nvme_admin": false,
00:08:30.605 "nvme_io": false,
00:08:30.605 "nvme_io_md": false,
00:08:30.605 "write_zeroes": true,
00:08:30.605 "zcopy": true,
00:08:30.605 "get_zone_info": false,
00:08:30.605 "zone_management": false,
00:08:30.605 "zone_append": false,
00:08:30.605 "compare": false,
00:08:30.605 "compare_and_write": false,
00:08:30.605 "abort": true,
00:08:30.605 "seek_hole": false,
00:08:30.605 "seek_data": false,
00:08:30.605 "copy": true,
00:08:30.605 "nvme_iov_md": false
00:08:30.605 },
00:08:30.605 "memory_domains": [
00:08:30.605 {
00:08:30.605 "dma_device_id": "system",
00:08:30.605 "dma_device_type": 1
00:08:30.605 },
00:08:30.605 {
00:08:30.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:30.605 "dma_device_type": 2
00:08:30.605 }
00:08:30.605 ],
00:08:30.605 "driver_specific": {}
00:08:30.605 }
00:08:30.605 ]'
00:08:30.605 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:08:30.605 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:08:30.605 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.605 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:30.605 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.605 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:08:30.605 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:08:30.862 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:08:30.862
00:08:30.862 real 0m0.108s
00:08:30.862 user 0m0.073s
00:08:30.862 sys 0m0.009s 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:30.862 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:30.862 ************************************
00:08:30.862 END TEST rpc_plugins
00:08:30.862 ************************************
00:08:30.862 08:04:40 rpc -- common/autotest_common.sh@1142 -- # return 0
00:08:30.862 08:04:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:08:30.862 08:04:40 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:30.862 08:04:40 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:30.862 08:04:40 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:30.862 ************************************
00:08:30.862 START TEST rpc_trace_cmd_test
00:08:30.862 ************************************
00:08:30.862 08:04:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test
00:08:30.862 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:08:30.862 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:08:30.862 08:04:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:30.862 08:04:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.862 08:04:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:30.862 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:08:30.862 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3986493",
00:08:30.862 "tpoint_group_mask": "0x8",
00:08:30.862 "iscsi_conn": {
00:08:30.862 "mask": "0x2",
00:08:30.862 "tpoint_mask": "0x0"
00:08:30.862 },
00:08:30.862 "scsi": {
00:08:30.862 "mask": "0x4",
00:08:30.862 "tpoint_mask": "0x0"
00:08:30.862 },
00:08:30.862 "bdev": {
00:08:30.862 "mask": "0x8",
00:08:30.862 "tpoint_mask": "0xffffffffffffffff"
00:08:30.862 },
00:08:30.862 "nvmf_rdma": {
00:08:30.862 "mask": "0x10",
00:08:30.862 "tpoint_mask": "0x0"
00:08:30.862 },
00:08:30.862 "nvmf_tcp": {
00:08:30.862 "mask": "0x20",
00:08:30.862 "tpoint_mask": "0x0"
00:08:30.862 },
00:08:30.862 "ftl": { 00:08:30.862 "mask": "0x40", 00:08:30.862 "tpoint_mask": "0x0" 00:08:30.862 }, 00:08:30.862 "blobfs": { 00:08:30.862 "mask": "0x80", 00:08:30.862 "tpoint_mask": "0x0" 00:08:30.862 }, 00:08:30.862 "dsa": { 00:08:30.862 "mask": "0x200", 00:08:30.862 "tpoint_mask": "0x0" 00:08:30.862 }, 00:08:30.862 "thread": { 00:08:30.862 "mask": "0x400", 00:08:30.862 "tpoint_mask": "0x0" 00:08:30.862 }, 00:08:30.862 "nvme_pcie": { 00:08:30.863 "mask": "0x800", 00:08:30.863 "tpoint_mask": "0x0" 00:08:30.863 }, 00:08:30.863 "iaa": { 00:08:30.863 "mask": "0x1000", 00:08:30.863 "tpoint_mask": "0x0" 00:08:30.863 }, 00:08:30.863 "nvme_tcp": { 00:08:30.863 "mask": "0x2000", 00:08:30.863 "tpoint_mask": "0x0" 00:08:30.863 }, 00:08:30.863 "bdev_nvme": { 00:08:30.863 "mask": "0x4000", 00:08:30.863 "tpoint_mask": "0x0" 00:08:30.863 }, 00:08:30.863 "sock": { 00:08:30.863 "mask": "0x8000", 00:08:30.863 "tpoint_mask": "0x0" 00:08:30.863 } 00:08:30.863 }' 00:08:30.863 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:30.863 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:08:30.863 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:30.863 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:30.863 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:30.863 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:30.863 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:30.863 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:30.863 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:31.120 08:04:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:31.120 00:08:31.120 real 0m0.193s 00:08:31.120 user 0m0.173s 00:08:31.120 sys 0m0.012s 00:08:31.120 08:04:40 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.120 08:04:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.120 ************************************ 00:08:31.120 END TEST rpc_trace_cmd_test 00:08:31.120 ************************************ 00:08:31.120 08:04:40 rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:31.120 08:04:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:31.120 08:04:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:31.120 08:04:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:31.120 08:04:40 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.120 08:04:40 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.120 08:04:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.120 ************************************ 00:08:31.120 START TEST rpc_daemon_integrity 00:08:31.120 ************************************ 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.120 08:04:40 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:31.120 { 00:08:31.120 "name": "Malloc2", 00:08:31.120 "aliases": [ 00:08:31.120 "e531726c-8655-4560-aef7-bd0e0bc0a11d" 00:08:31.120 ], 00:08:31.120 "product_name": "Malloc disk", 00:08:31.120 "block_size": 512, 00:08:31.120 "num_blocks": 16384, 00:08:31.120 "uuid": "e531726c-8655-4560-aef7-bd0e0bc0a11d", 00:08:31.120 "assigned_rate_limits": { 00:08:31.120 "rw_ios_per_sec": 0, 00:08:31.120 "rw_mbytes_per_sec": 0, 00:08:31.120 "r_mbytes_per_sec": 0, 00:08:31.120 "w_mbytes_per_sec": 0 00:08:31.120 }, 00:08:31.120 "claimed": false, 00:08:31.120 "zoned": false, 00:08:31.120 "supported_io_types": { 00:08:31.120 "read": true, 00:08:31.120 "write": true, 00:08:31.120 "unmap": true, 00:08:31.120 "flush": true, 00:08:31.120 "reset": true, 00:08:31.120 "nvme_admin": false, 00:08:31.120 "nvme_io": false, 00:08:31.120 "nvme_io_md": false, 00:08:31.120 "write_zeroes": true, 00:08:31.120 "zcopy": true, 00:08:31.120 "get_zone_info": false, 00:08:31.120 "zone_management": false, 00:08:31.120 "zone_append": false, 00:08:31.120 "compare": false, 00:08:31.120 "compare_and_write": false, 00:08:31.120 "abort": true, 00:08:31.120 "seek_hole": false, 00:08:31.120 "seek_data": false, 00:08:31.120 "copy": true, 00:08:31.120 "nvme_iov_md": false 00:08:31.120 }, 00:08:31.120 "memory_domains": [ 00:08:31.120 { 00:08:31.120 "dma_device_id": "system", 00:08:31.120 "dma_device_type": 
1 00:08:31.120 }, 00:08:31.120 { 00:08:31.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.120 "dma_device_type": 2 00:08:31.120 } 00:08:31.120 ], 00:08:31.120 "driver_specific": {} 00:08:31.120 } 00:08:31.120 ]' 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.120 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.120 [2024-07-21 08:04:40.655597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:31.120 [2024-07-21 08:04:40.655664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.121 [2024-07-21 08:04:40.655687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11643d0 00:08:31.121 [2024-07-21 08:04:40.655701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.121 [2024-07-21 08:04:40.657048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.121 [2024-07-21 08:04:40.657076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:31.121 Passthru0 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:08:31.121 { 00:08:31.121 "name": "Malloc2", 00:08:31.121 "aliases": [ 00:08:31.121 "e531726c-8655-4560-aef7-bd0e0bc0a11d" 00:08:31.121 ], 00:08:31.121 "product_name": "Malloc disk", 00:08:31.121 "block_size": 512, 00:08:31.121 "num_blocks": 16384, 00:08:31.121 "uuid": "e531726c-8655-4560-aef7-bd0e0bc0a11d", 00:08:31.121 "assigned_rate_limits": { 00:08:31.121 "rw_ios_per_sec": 0, 00:08:31.121 "rw_mbytes_per_sec": 0, 00:08:31.121 "r_mbytes_per_sec": 0, 00:08:31.121 "w_mbytes_per_sec": 0 00:08:31.121 }, 00:08:31.121 "claimed": true, 00:08:31.121 "claim_type": "exclusive_write", 00:08:31.121 "zoned": false, 00:08:31.121 "supported_io_types": { 00:08:31.121 "read": true, 00:08:31.121 "write": true, 00:08:31.121 "unmap": true, 00:08:31.121 "flush": true, 00:08:31.121 "reset": true, 00:08:31.121 "nvme_admin": false, 00:08:31.121 "nvme_io": false, 00:08:31.121 "nvme_io_md": false, 00:08:31.121 "write_zeroes": true, 00:08:31.121 "zcopy": true, 00:08:31.121 "get_zone_info": false, 00:08:31.121 "zone_management": false, 00:08:31.121 "zone_append": false, 00:08:31.121 "compare": false, 00:08:31.121 "compare_and_write": false, 00:08:31.121 "abort": true, 00:08:31.121 "seek_hole": false, 00:08:31.121 "seek_data": false, 00:08:31.121 "copy": true, 00:08:31.121 "nvme_iov_md": false 00:08:31.121 }, 00:08:31.121 "memory_domains": [ 00:08:31.121 { 00:08:31.121 "dma_device_id": "system", 00:08:31.121 "dma_device_type": 1 00:08:31.121 }, 00:08:31.121 { 00:08:31.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.121 "dma_device_type": 2 00:08:31.121 } 00:08:31.121 ], 00:08:31.121 "driver_specific": {} 00:08:31.121 }, 00:08:31.121 { 00:08:31.121 "name": "Passthru0", 00:08:31.121 "aliases": [ 00:08:31.121 "b74da4c5-116a-5f09-a7ec-1ff08ee39b1f" 00:08:31.121 ], 00:08:31.121 "product_name": "passthru", 00:08:31.121 "block_size": 512, 00:08:31.121 "num_blocks": 16384, 00:08:31.121 "uuid": "b74da4c5-116a-5f09-a7ec-1ff08ee39b1f", 00:08:31.121 "assigned_rate_limits": { 00:08:31.121 
"rw_ios_per_sec": 0, 00:08:31.121 "rw_mbytes_per_sec": 0, 00:08:31.121 "r_mbytes_per_sec": 0, 00:08:31.121 "w_mbytes_per_sec": 0 00:08:31.121 }, 00:08:31.121 "claimed": false, 00:08:31.121 "zoned": false, 00:08:31.121 "supported_io_types": { 00:08:31.121 "read": true, 00:08:31.121 "write": true, 00:08:31.121 "unmap": true, 00:08:31.121 "flush": true, 00:08:31.121 "reset": true, 00:08:31.121 "nvme_admin": false, 00:08:31.121 "nvme_io": false, 00:08:31.121 "nvme_io_md": false, 00:08:31.121 "write_zeroes": true, 00:08:31.121 "zcopy": true, 00:08:31.121 "get_zone_info": false, 00:08:31.121 "zone_management": false, 00:08:31.121 "zone_append": false, 00:08:31.121 "compare": false, 00:08:31.121 "compare_and_write": false, 00:08:31.121 "abort": true, 00:08:31.121 "seek_hole": false, 00:08:31.121 "seek_data": false, 00:08:31.121 "copy": true, 00:08:31.121 "nvme_iov_md": false 00:08:31.121 }, 00:08:31.121 "memory_domains": [ 00:08:31.121 { 00:08:31.121 "dma_device_id": "system", 00:08:31.121 "dma_device_type": 1 00:08:31.121 }, 00:08:31.121 { 00:08:31.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.121 "dma_device_type": 2 00:08:31.121 } 00:08:31.121 ], 00:08:31.121 "driver_specific": { 00:08:31.121 "passthru": { 00:08:31.121 "name": "Passthru0", 00:08:31.121 "base_bdev_name": "Malloc2" 00:08:31.121 } 00:08:31.121 } 00:08:31.121 } 00:08:31.121 ]' 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:31.121 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:31.377 08:04:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:31.377 00:08:31.377 real 0m0.225s 00:08:31.377 user 0m0.149s 00:08:31.377 sys 0m0.023s 00:08:31.377 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.377 08:04:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:31.377 ************************************ 00:08:31.377 END TEST rpc_daemon_integrity 00:08:31.377 ************************************ 00:08:31.377 08:04:40 rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:31.378 08:04:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:31.378 08:04:40 rpc -- rpc/rpc.sh@84 -- # killprocess 3986493 00:08:31.378 08:04:40 rpc -- common/autotest_common.sh@948 -- # '[' -z 3986493 ']' 00:08:31.378 08:04:40 rpc -- common/autotest_common.sh@952 -- # kill -0 3986493 00:08:31.378 08:04:40 rpc -- common/autotest_common.sh@953 -- # uname 00:08:31.378 08:04:40 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:31.378 08:04:40 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 3986493 00:08:31.378 08:04:40 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:31.378 08:04:40 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:31.378 08:04:40 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3986493' 00:08:31.378 killing process with pid 3986493 00:08:31.378 08:04:40 rpc -- common/autotest_common.sh@967 -- # kill 3986493 00:08:31.378 08:04:40 rpc -- common/autotest_common.sh@972 -- # wait 3986493 00:08:31.635 00:08:31.635 real 0m1.845s 00:08:31.635 user 0m2.344s 00:08:31.635 sys 0m0.560s 00:08:31.635 08:04:41 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.635 08:04:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.635 ************************************ 00:08:31.635 END TEST rpc 00:08:31.635 ************************************ 00:08:31.635 08:04:41 -- common/autotest_common.sh@1142 -- # return 0 00:08:31.635 08:04:41 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:31.635 08:04:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.635 08:04:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.635 08:04:41 -- common/autotest_common.sh@10 -- # set +x 00:08:31.635 ************************************ 00:08:31.635 START TEST skip_rpc 00:08:31.635 ************************************ 00:08:31.635 08:04:41 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:31.892 * Looking for test storage... 
00:08:31.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:31.892 08:04:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:31.892 08:04:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:31.892 08:04:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:31.892 08:04:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.892 08:04:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.892 08:04:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.892 ************************************ 00:08:31.892 START TEST skip_rpc 00:08:31.892 ************************************ 00:08:31.892 08:04:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:08:31.892 08:04:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3986932 00:08:31.892 08:04:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:31.892 08:04:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:31.892 08:04:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:31.892 [2024-07-21 08:04:41.374331] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:08:31.892 [2024-07-21 08:04:41.374432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3986932 ] 00:08:31.892 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.892 [2024-07-21 08:04:41.435787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.148 [2024-07-21 08:04:41.525724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3986932 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3986932 ']' 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3986932 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3986932 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3986932' 00:08:37.401 killing process with pid 3986932 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3986932 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3986932 00:08:37.401 00:08:37.401 real 0m5.439s 00:08:37.401 user 0m5.124s 00:08:37.401 sys 0m0.324s 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.401 08:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.401 ************************************ 00:08:37.401 END TEST skip_rpc 00:08:37.401 ************************************ 00:08:37.401 08:04:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:37.401 08:04:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:37.401 08:04:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:37.401 08:04:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.401 
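The skip_rpc test above starts `spdk_tgt` with `--no-rpc-server` and then uses the `NOT` helper from autotest_common.sh to assert that `rpc_cmd spdk_get_version` fails (note `es=1` and `[[ 1 == 0 ]]` evaluating false in the trace). A hedged Python sketch of that inverted-exit-status pattern; here a plainly failing command stands in for the rpc call, since no SPDK target is assumed to be running:

```python
import subprocess

def expect_failure(cmd):
    """Mimic the NOT helper: succeed only when the command fails."""
    es = subprocess.run(cmd).returncode  # es, as in the shell trace
    return es != 0

# Stand-in for `rpc_cmd spdk_get_version` against a --no-rpc-server
# target, which is expected to fail.
print(expect_failure(["false"]))
```

The shell version additionally distinguishes exit codes above 128 (signal deaths) so a crashing target is not mistaken for a cleanly rejected RPC, which is what the `(( es > 128 ))` branch in the trace handles.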
08:04:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.401 ************************************ 00:08:37.401 START TEST skip_rpc_with_json 00:08:37.401 ************************************ 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3987623 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3987623 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3987623 ']' 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.401 08:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:37.401 [2024-07-21 08:04:46.862906] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:08:37.401 [2024-07-21 08:04:46.863017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3987623 ] 00:08:37.401 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.401 [2024-07-21 08:04:46.924552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.401 [2024-07-21 08:04:47.013318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.658 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.658 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:08:37.658 08:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:37.658 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.658 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 [2024-07-21 08:04:47.274019] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:37.658 request: 00:08:37.658 { 00:08:37.658 "trtype": "tcp", 00:08:37.658 "method": "nvmf_get_transports", 00:08:37.658 "req_id": 1 00:08:37.658 } 00:08:37.658 Got JSON-RPC error response 00:08:37.658 response: 00:08:37.658 { 00:08:37.658 "code": -19, 00:08:37.658 "message": "No such device" 00:08:37.658 } 00:08:37.658 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:37.658 08:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:37.658 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.658 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 [2024-07-21 08:04:47.282128] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.659 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.659 08:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:37.659 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.659 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:37.916 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.916 08:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:37.916 { 00:08:37.916 "subsystems": [ 00:08:37.916 { 00:08:37.916 "subsystem": "vfio_user_target", 00:08:37.916 "config": null 00:08:37.916 }, 00:08:37.916 { 00:08:37.916 "subsystem": "keyring", 00:08:37.916 "config": [] 00:08:37.916 }, 00:08:37.916 { 00:08:37.916 "subsystem": "iobuf", 00:08:37.916 "config": [ 00:08:37.916 { 00:08:37.916 "method": "iobuf_set_options", 00:08:37.916 "params": { 00:08:37.916 "small_pool_count": 8192, 00:08:37.916 "large_pool_count": 1024, 00:08:37.916 "small_bufsize": 8192, 00:08:37.916 "large_bufsize": 135168 00:08:37.916 } 00:08:37.916 } 00:08:37.916 ] 00:08:37.916 }, 00:08:37.916 { 00:08:37.916 "subsystem": "sock", 00:08:37.916 "config": [ 00:08:37.916 { 00:08:37.916 "method": "sock_set_default_impl", 00:08:37.916 "params": { 00:08:37.916 "impl_name": "posix" 00:08:37.916 } 00:08:37.916 }, 00:08:37.916 { 00:08:37.916 "method": "sock_impl_set_options", 00:08:37.916 "params": { 00:08:37.916 "impl_name": "ssl", 00:08:37.916 "recv_buf_size": 4096, 00:08:37.916 "send_buf_size": 4096, 00:08:37.916 "enable_recv_pipe": true, 00:08:37.916 "enable_quickack": false, 00:08:37.916 "enable_placement_id": 0, 00:08:37.916 "enable_zerocopy_send_server": true, 00:08:37.916 "enable_zerocopy_send_client": false, 00:08:37.916 "zerocopy_threshold": 0, 
00:08:37.916 "tls_version": 0, 00:08:37.916 "enable_ktls": false 00:08:37.916 } 00:08:37.916 }, 00:08:37.916 { 00:08:37.916 "method": "sock_impl_set_options", 00:08:37.916 "params": { 00:08:37.916 "impl_name": "posix", 00:08:37.916 "recv_buf_size": 2097152, 00:08:37.916 "send_buf_size": 2097152, 00:08:37.916 "enable_recv_pipe": true, 00:08:37.916 "enable_quickack": false, 00:08:37.916 "enable_placement_id": 0, 00:08:37.916 "enable_zerocopy_send_server": true, 00:08:37.916 "enable_zerocopy_send_client": false, 00:08:37.916 "zerocopy_threshold": 0, 00:08:37.916 "tls_version": 0, 00:08:37.916 "enable_ktls": false 00:08:37.916 } 00:08:37.916 } 00:08:37.916 ] 00:08:37.916 }, 00:08:37.916 { 00:08:37.916 "subsystem": "vmd", 00:08:37.916 "config": [] 00:08:37.916 }, 00:08:37.916 { 00:08:37.916 "subsystem": "accel", 00:08:37.916 "config": [ 00:08:37.916 { 00:08:37.916 "method": "accel_set_options", 00:08:37.916 "params": { 00:08:37.916 "small_cache_size": 128, 00:08:37.916 "large_cache_size": 16, 00:08:37.916 "task_count": 2048, 00:08:37.916 "sequence_count": 2048, 00:08:37.916 "buf_count": 2048 00:08:37.916 } 00:08:37.916 } 00:08:37.916 ] 00:08:37.916 }, 00:08:37.916 { 00:08:37.916 "subsystem": "bdev", 00:08:37.916 "config": [ 00:08:37.916 { 00:08:37.916 "method": "bdev_set_options", 00:08:37.916 "params": { 00:08:37.916 "bdev_io_pool_size": 65535, 00:08:37.916 "bdev_io_cache_size": 256, 00:08:37.916 "bdev_auto_examine": true, 00:08:37.916 "iobuf_small_cache_size": 128, 00:08:37.916 "iobuf_large_cache_size": 16 00:08:37.916 } 00:08:37.916 }, 00:08:37.916 { 00:08:37.916 "method": "bdev_raid_set_options", 00:08:37.916 "params": { 00:08:37.916 "process_window_size_kb": 1024, 00:08:37.916 "process_max_bandwidth_mb_sec": 0 00:08:37.916 } 00:08:37.916 }, 00:08:37.916 { 00:08:37.916 "method": "bdev_iscsi_set_options", 00:08:37.916 "params": { 00:08:37.916 "timeout_sec": 30 00:08:37.916 } 00:08:37.916 }, 00:08:37.916 { 00:08:37.916 "method": "bdev_nvme_set_options", 00:08:37.916 
"params": { 00:08:37.916 "action_on_timeout": "none", 00:08:37.916 "timeout_us": 0, 00:08:37.916 "timeout_admin_us": 0, 00:08:37.916 "keep_alive_timeout_ms": 10000, 00:08:37.916 "arbitration_burst": 0, 00:08:37.916 "low_priority_weight": 0, 00:08:37.916 "medium_priority_weight": 0, 00:08:37.916 "high_priority_weight": 0, 00:08:37.916 "nvme_adminq_poll_period_us": 10000, 00:08:37.916 "nvme_ioq_poll_period_us": 0, 00:08:37.916 "io_queue_requests": 0, 00:08:37.916 "delay_cmd_submit": true, 00:08:37.916 "transport_retry_count": 4, 00:08:37.916 "bdev_retry_count": 3, 00:08:37.916 "transport_ack_timeout": 0, 00:08:37.916 "ctrlr_loss_timeout_sec": 0, 00:08:37.916 "reconnect_delay_sec": 0, 00:08:37.916 "fast_io_fail_timeout_sec": 0, 00:08:37.916 "disable_auto_failback": false, 00:08:37.916 "generate_uuids": false, 00:08:37.916 "transport_tos": 0, 00:08:37.916 "nvme_error_stat": false, 00:08:37.916 "rdma_srq_size": 0, 00:08:37.916 "io_path_stat": false, 00:08:37.916 "allow_accel_sequence": false, 00:08:37.916 "rdma_max_cq_size": 0, 00:08:37.916 "rdma_cm_event_timeout_ms": 0, 00:08:37.916 "dhchap_digests": [ 00:08:37.916 "sha256", 00:08:37.916 "sha384", 00:08:37.916 "sha512" 00:08:37.916 ], 00:08:37.916 "dhchap_dhgroups": [ 00:08:37.916 "null", 00:08:37.916 "ffdhe2048", 00:08:37.916 "ffdhe3072", 00:08:37.916 "ffdhe4096", 00:08:37.916 "ffdhe6144", 00:08:37.916 "ffdhe8192" 00:08:37.916 ] 00:08:37.917 } 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "method": "bdev_nvme_set_hotplug", 00:08:37.917 "params": { 00:08:37.917 "period_us": 100000, 00:08:37.917 "enable": false 00:08:37.917 } 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "method": "bdev_wait_for_examine" 00:08:37.917 } 00:08:37.917 ] 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "subsystem": "scsi", 00:08:37.917 "config": null 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "subsystem": "scheduler", 00:08:37.917 "config": [ 00:08:37.917 { 00:08:37.917 "method": "framework_set_scheduler", 00:08:37.917 "params": { 00:08:37.917 
"name": "static" 00:08:37.917 } 00:08:37.917 } 00:08:37.917 ] 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "subsystem": "vhost_scsi", 00:08:37.917 "config": [] 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "subsystem": "vhost_blk", 00:08:37.917 "config": [] 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "subsystem": "ublk", 00:08:37.917 "config": [] 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "subsystem": "nbd", 00:08:37.917 "config": [] 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "subsystem": "nvmf", 00:08:37.917 "config": [ 00:08:37.917 { 00:08:37.917 "method": "nvmf_set_config", 00:08:37.917 "params": { 00:08:37.917 "discovery_filter": "match_any", 00:08:37.917 "admin_cmd_passthru": { 00:08:37.917 "identify_ctrlr": false 00:08:37.917 } 00:08:37.917 } 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "method": "nvmf_set_max_subsystems", 00:08:37.917 "params": { 00:08:37.917 "max_subsystems": 1024 00:08:37.917 } 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "method": "nvmf_set_crdt", 00:08:37.917 "params": { 00:08:37.917 "crdt1": 0, 00:08:37.917 "crdt2": 0, 00:08:37.917 "crdt3": 0 00:08:37.917 } 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "method": "nvmf_create_transport", 00:08:37.917 "params": { 00:08:37.917 "trtype": "TCP", 00:08:37.917 "max_queue_depth": 128, 00:08:37.917 "max_io_qpairs_per_ctrlr": 127, 00:08:37.917 "in_capsule_data_size": 4096, 00:08:37.917 "max_io_size": 131072, 00:08:37.917 "io_unit_size": 131072, 00:08:37.917 "max_aq_depth": 128, 00:08:37.917 "num_shared_buffers": 511, 00:08:37.917 "buf_cache_size": 4294967295, 00:08:37.917 "dif_insert_or_strip": false, 00:08:37.917 "zcopy": false, 00:08:37.917 "c2h_success": true, 00:08:37.917 "sock_priority": 0, 00:08:37.917 "abort_timeout_sec": 1, 00:08:37.917 "ack_timeout": 0, 00:08:37.917 "data_wr_pool_size": 0 00:08:37.917 } 00:08:37.917 } 00:08:37.917 ] 00:08:37.917 }, 00:08:37.917 { 00:08:37.917 "subsystem": "iscsi", 00:08:37.917 "config": [ 00:08:37.917 { 00:08:37.917 "method": "iscsi_set_options", 00:08:37.917 
"params": { 00:08:37.917 "node_base": "iqn.2016-06.io.spdk", 00:08:37.917 "max_sessions": 128, 00:08:37.917 "max_connections_per_session": 2, 00:08:37.917 "max_queue_depth": 64, 00:08:37.917 "default_time2wait": 2, 00:08:37.917 "default_time2retain": 20, 00:08:37.917 "first_burst_length": 8192, 00:08:37.917 "immediate_data": true, 00:08:37.917 "allow_duplicated_isid": false, 00:08:37.917 "error_recovery_level": 0, 00:08:37.917 "nop_timeout": 60, 00:08:37.917 "nop_in_interval": 30, 00:08:37.917 "disable_chap": false, 00:08:37.917 "require_chap": false, 00:08:37.917 "mutual_chap": false, 00:08:37.917 "chap_group": 0, 00:08:37.917 "max_large_datain_per_connection": 64, 00:08:37.917 "max_r2t_per_connection": 4, 00:08:37.917 "pdu_pool_size": 36864, 00:08:37.917 "immediate_data_pool_size": 16384, 00:08:37.917 "data_out_pool_size": 2048 00:08:37.917 } 00:08:37.917 } 00:08:37.917 ] 00:08:37.917 } 00:08:37.917 ] 00:08:37.917 } 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3987623 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3987623 ']' 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3987623 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3987623 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 3987623' 00:08:37.917 killing process with pid 3987623 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3987623 00:08:37.917 08:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3987623 00:08:38.481 08:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3987761 00:08:38.481 08:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:38.481 08:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:43.737 08:04:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3987761 00:08:43.737 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3987761 ']' 00:08:43.737 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3987761 00:08:43.737 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:08:43.737 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.737 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3987761 00:08:43.737 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:43.737 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:43.737 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3987761' 00:08:43.737 killing process with pid 3987761 00:08:43.737 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3987761 00:08:43.737 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3987761 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:43.737 00:08:43.737 real 0m6.466s 00:08:43.737 user 0m6.068s 00:08:43.737 sys 0m0.682s 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:43.737 ************************************ 00:08:43.737 END TEST skip_rpc_with_json 00:08:43.737 ************************************ 00:08:43.737 08:04:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:43.737 08:04:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:43.737 08:04:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:43.737 08:04:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.737 08:04:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.737 ************************************ 00:08:43.737 START TEST skip_rpc_with_delay 00:08:43.737 ************************************ 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:43.737 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:43.995 [2024-07-21 08:04:53.377387] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:43.995 [2024-07-21 08:04:53.377502] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:43.995 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:08:43.995 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:43.995 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:43.995 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:43.995 00:08:43.995 real 0m0.069s 00:08:43.995 user 0m0.041s 00:08:43.995 sys 0m0.027s 00:08:43.995 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.995 08:04:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:43.995 ************************************ 00:08:43.995 END TEST skip_rpc_with_delay 00:08:43.995 ************************************ 00:08:43.995 08:04:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:43.995 08:04:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:43.995 08:04:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:43.995 08:04:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:43.995 08:04:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:43.995 08:04:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.995 08:04:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.995 ************************************ 00:08:43.995 START TEST exit_on_failed_rpc_init 00:08:43.995 ************************************ 00:08:43.995 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:08:43.995 08:04:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3988473 00:08:43.995 08:04:53 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:43.995 08:04:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3988473 00:08:43.995 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3988473 ']' 00:08:43.995 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.995 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.995 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.995 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.995 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:43.995 [2024-07-21 08:04:53.486036] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:08:43.995 [2024-07-21 08:04:53.486131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3988473 ] 00:08:43.995 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.995 [2024-07-21 08:04:53.547164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.253 [2024-07-21 08:04:53.643449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:44.511 08:04:53 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:44.511 08:04:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:44.511 [2024-07-21 08:04:53.948666] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:08:44.511 [2024-07-21 08:04:53.948762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3988490 ] 00:08:44.511 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.511 [2024-07-21 08:04:54.009501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.511 [2024-07-21 08:04:54.102884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.511 [2024-07-21 08:04:54.103012] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:44.511 [2024-07-21 08:04:54.103034] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:44.511 [2024-07-21 08:04:54.103048] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3988473 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3988473 ']' 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3988473 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3988473 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3988473' 
00:08:44.770 killing process with pid 3988473 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3988473 00:08:44.770 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3988473 00:08:45.028 00:08:45.028 real 0m1.183s 00:08:45.028 user 0m1.302s 00:08:45.028 sys 0m0.463s 00:08:45.028 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.028 08:04:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:45.028 ************************************ 00:08:45.028 END TEST exit_on_failed_rpc_init 00:08:45.028 ************************************ 00:08:45.028 08:04:54 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:45.028 08:04:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:45.028 00:08:45.028 real 0m13.394s 00:08:45.028 user 0m12.628s 00:08:45.028 sys 0m1.656s 00:08:45.028 08:04:54 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.028 08:04:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.028 ************************************ 00:08:45.028 END TEST skip_rpc 00:08:45.028 ************************************ 00:08:45.285 08:04:54 -- common/autotest_common.sh@1142 -- # return 0 00:08:45.285 08:04:54 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:45.285 08:04:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:45.285 08:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.285 08:04:54 -- common/autotest_common.sh@10 -- # set +x 00:08:45.285 ************************************ 00:08:45.285 START TEST rpc_client 00:08:45.285 ************************************ 00:08:45.285 08:04:54 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:45.285 * Looking for test storage... 00:08:45.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:45.285 08:04:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:45.285 OK 00:08:45.285 08:04:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:45.285 00:08:45.285 real 0m0.071s 00:08:45.285 user 0m0.029s 00:08:45.285 sys 0m0.047s 00:08:45.285 08:04:54 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.285 08:04:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:45.285 ************************************ 00:08:45.285 END TEST rpc_client 00:08:45.285 ************************************ 00:08:45.285 08:04:54 -- common/autotest_common.sh@1142 -- # return 0 00:08:45.285 08:04:54 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:45.285 08:04:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:45.285 08:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.285 08:04:54 -- common/autotest_common.sh@10 -- # set +x 00:08:45.285 ************************************ 00:08:45.285 START TEST json_config 00:08:45.285 ************************************ 00:08:45.285 08:04:54 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:45.285 08:04:54 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.285 
08:04:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.285 08:04:54 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.285 08:04:54 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.285 08:04:54 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.285 08:04:54 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.285 08:04:54 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.285 08:04:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.285 08:04:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.285 08:04:54 json_config -- paths/export.sh@5 -- # export PATH 00:08:45.286 08:04:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.286 08:04:54 json_config -- nvmf/common.sh@47 -- # : 0 00:08:45.286 08:04:54 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.286 
08:04:54 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.286 08:04:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.286 08:04:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.286 08:04:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.286 08:04:54 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.286 08:04:54 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.286 08:04:54 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:08:45.286 INFO: JSON configuration test init 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:08:45.286 08:04:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.286 08:04:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:08:45.286 08:04:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.286 08:04:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.286 08:04:54 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:08:45.286 08:04:54 json_config -- json_config/common.sh@9 -- # local app=target 00:08:45.286 08:04:54 json_config -- json_config/common.sh@10 -- # shift 00:08:45.286 08:04:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:45.286 08:04:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:45.286 08:04:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:45.286 08:04:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:45.286 08:04:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:08:45.286 08:04:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3988731 00:08:45.286 08:04:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:45.286 08:04:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:45.286 Waiting for target to run... 00:08:45.286 08:04:54 json_config -- json_config/common.sh@25 -- # waitforlisten 3988731 /var/tmp/spdk_tgt.sock 00:08:45.286 08:04:54 json_config -- common/autotest_common.sh@829 -- # '[' -z 3988731 ']' 00:08:45.286 08:04:54 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:45.286 08:04:54 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.286 08:04:54 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:45.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:45.286 08:04:54 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.286 08:04:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.286 [2024-07-21 08:04:54.911238] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:08:45.286 [2024-07-21 08:04:54.911335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3988731 ] 00:08:45.561 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.819 [2024-07-21 08:04:55.257511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.819 [2024-07-21 08:04:55.320930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.382 08:04:55 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.382 08:04:55 json_config -- common/autotest_common.sh@862 -- # return 0 00:08:46.382 08:04:55 json_config -- json_config/common.sh@26 -- # echo '' 00:08:46.382 00:08:46.382 08:04:55 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:08:46.382 08:04:55 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:08:46.382 08:04:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:46.382 08:04:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:46.382 08:04:55 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:08:46.382 08:04:55 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:08:46.382 08:04:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.382 08:04:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:46.382 08:04:55 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:46.382 08:04:55 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:08:46.382 08:04:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:49.695 
08:04:59 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:49.695 08:04:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:49.695 08:04:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:08:49.695 08:04:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@48 -- # local get_types 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@51 -- # sort 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:08:49.695 08:04:59 
json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.695 08:04:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@59 -- # return 0 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:08:49.695 08:04:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:49.695 08:04:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:08:49.695 08:04:59 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:49.695 08:04:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:49.952 MallocForNvmf0 00:08:49.952 08:04:59 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:49.952 08:04:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:50.209 MallocForNvmf1 00:08:50.209 08:04:59 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:50.209 08:04:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:50.465 [2024-07-21 08:05:00.024988] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.465 08:05:00 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:50.465 08:05:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:50.721 08:05:00 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:50.721 08:05:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:50.977 08:05:00 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:50.977 08:05:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:51.234 08:05:00 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:51.234 08:05:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:51.491 [2024-07-21 08:05:01.004175] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:51.492 08:05:01 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:08:51.492 08:05:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.492 08:05:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:51.492 08:05:01 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:08:51.492 08:05:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.492 08:05:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:51.492 08:05:01 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:08:51.492 08:05:01 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:51.492 08:05:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:51.750 MallocBdevForConfigChangeCheck 00:08:51.750 08:05:01 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:08:51.750 08:05:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.750 08:05:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:51.750 08:05:01 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:08:51.750 08:05:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:52.314 08:05:01 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:08:52.314 INFO: shutting down applications... 
00:08:52.314 08:05:01 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:08:52.314 08:05:01 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:08:52.314 08:05:01 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:08:52.314 08:05:01 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:54.211 Calling clear_iscsi_subsystem 00:08:54.211 Calling clear_nvmf_subsystem 00:08:54.211 Calling clear_nbd_subsystem 00:08:54.211 Calling clear_ublk_subsystem 00:08:54.211 Calling clear_vhost_blk_subsystem 00:08:54.211 Calling clear_vhost_scsi_subsystem 00:08:54.211 Calling clear_bdev_subsystem 00:08:54.211 08:05:03 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:54.211 08:05:03 json_config -- json_config/json_config.sh@347 -- # count=100 00:08:54.211 08:05:03 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:08:54.211 08:05:03 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:54.211 08:05:03 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:54.211 08:05:03 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:54.211 08:05:03 json_config -- json_config/json_config.sh@349 -- # break 00:08:54.211 08:05:03 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:08:54.211 08:05:03 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:08:54.211 08:05:03 json_config -- 
json_config/common.sh@31 -- # local app=target 00:08:54.211 08:05:03 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:54.211 08:05:03 json_config -- json_config/common.sh@35 -- # [[ -n 3988731 ]] 00:08:54.211 08:05:03 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3988731 00:08:54.211 08:05:03 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:54.211 08:05:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:54.211 08:05:03 json_config -- json_config/common.sh@41 -- # kill -0 3988731 00:08:54.211 08:05:03 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:54.777 08:05:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:54.777 08:05:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:54.777 08:05:04 json_config -- json_config/common.sh@41 -- # kill -0 3988731 00:08:54.777 08:05:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:54.777 08:05:04 json_config -- json_config/common.sh@43 -- # break 00:08:54.777 08:05:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:54.777 08:05:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:54.777 SPDK target shutdown done 00:08:54.777 08:05:04 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:08:54.777 INFO: relaunching applications... 
00:08:54.777 08:05:04 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:54.777 08:05:04 json_config -- json_config/common.sh@9 -- # local app=target 00:08:54.777 08:05:04 json_config -- json_config/common.sh@10 -- # shift 00:08:54.777 08:05:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:54.777 08:05:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:54.777 08:05:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:54.777 08:05:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:54.777 08:05:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:54.777 08:05:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3989931 00:08:54.777 08:05:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:54.777 08:05:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:54.777 Waiting for target to run... 00:08:54.777 08:05:04 json_config -- json_config/common.sh@25 -- # waitforlisten 3989931 /var/tmp/spdk_tgt.sock 00:08:54.777 08:05:04 json_config -- common/autotest_common.sh@829 -- # '[' -z 3989931 ']' 00:08:54.777 08:05:04 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:54.777 08:05:04 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.777 08:05:04 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:54.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:08:54.777 08:05:04 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.777 08:05:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:54.777 [2024-07-21 08:05:04.264646] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:08:54.777 [2024-07-21 08:05:04.264742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3989931 ] 00:08:54.777 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.343 [2024-07-21 08:05:04.788350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.343 [2024-07-21 08:05:04.870334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.632 [2024-07-21 08:05:07.902289] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.632 [2024-07-21 08:05:07.934762] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:58.632 08:05:07 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.632 08:05:07 json_config -- common/autotest_common.sh@862 -- # return 0 00:08:58.632 08:05:07 json_config -- json_config/common.sh@26 -- # echo '' 00:08:58.632 00:08:58.632 08:05:07 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:08:58.632 08:05:07 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:58.632 INFO: Checking if target configuration is the same... 
00:08:58.632 08:05:07 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:58.632 08:05:07 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:08:58.632 08:05:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:58.632 + '[' 2 -ne 2 ']' 00:08:58.632 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:58.632 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:58.632 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:58.632 +++ basename /dev/fd/62 00:08:58.632 ++ mktemp /tmp/62.XXX 00:08:58.632 + tmp_file_1=/tmp/62.wyn 00:08:58.632 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:58.632 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:58.632 + tmp_file_2=/tmp/spdk_tgt_config.json.G2x 00:08:58.632 + ret=0 00:08:58.632 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:58.889 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:58.889 + diff -u /tmp/62.wyn /tmp/spdk_tgt_config.json.G2x 00:08:58.889 + echo 'INFO: JSON config files are the same' 00:08:58.889 INFO: JSON config files are the same 00:08:58.889 + rm /tmp/62.wyn /tmp/spdk_tgt_config.json.G2x 00:08:58.889 + exit 0 00:08:58.889 08:05:08 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:08:58.889 08:05:08 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:58.889 INFO: changing configuration and checking if this can be detected... 
00:08:58.889 08:05:08 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:58.889 08:05:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:59.147 08:05:08 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:59.147 08:05:08 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:08:59.147 08:05:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:59.147 + '[' 2 -ne 2 ']' 00:08:59.147 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:59.147 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:08:59.147 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:59.147 +++ basename /dev/fd/62 00:08:59.147 ++ mktemp /tmp/62.XXX 00:08:59.147 + tmp_file_1=/tmp/62.vy8 00:08:59.147 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:59.147 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:59.147 + tmp_file_2=/tmp/spdk_tgt_config.json.5Jz 00:08:59.147 + ret=0 00:08:59.147 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:59.404 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:59.661 + diff -u /tmp/62.vy8 /tmp/spdk_tgt_config.json.5Jz 00:08:59.661 + ret=1 00:08:59.661 + echo '=== Start of file: /tmp/62.vy8 ===' 00:08:59.661 + cat /tmp/62.vy8 00:08:59.661 + echo '=== End of file: /tmp/62.vy8 ===' 00:08:59.661 + echo '' 00:08:59.661 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5Jz ===' 00:08:59.661 + cat /tmp/spdk_tgt_config.json.5Jz 00:08:59.661 + echo '=== End of file: /tmp/spdk_tgt_config.json.5Jz ===' 00:08:59.661 + echo '' 00:08:59.661 + rm /tmp/62.vy8 /tmp/spdk_tgt_config.json.5Jz 00:08:59.661 + exit 1 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:08:59.661 INFO: configuration change detected. 
00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@321 -- # [[ -n 3989931 ]] 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@197 -- # uname -s 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:59.661 08:05:09 json_config -- json_config/json_config.sh@327 -- # killprocess 3989931 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@948 -- # '[' -z 3989931 ']' 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@952 -- # kill -0 
3989931 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@953 -- # uname 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3989931 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3989931' 00:08:59.661 killing process with pid 3989931 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@967 -- # kill 3989931 00:08:59.661 08:05:09 json_config -- common/autotest_common.sh@972 -- # wait 3989931 00:09:01.599 08:05:10 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:01.599 08:05:10 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:09:01.599 08:05:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.599 08:05:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.599 08:05:10 json_config -- json_config/json_config.sh@332 -- # return 0 00:09:01.599 08:05:10 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:09:01.599 INFO: Success 00:09:01.599 00:09:01.599 real 0m15.952s 00:09:01.599 user 0m17.662s 00:09:01.599 sys 0m2.061s 00:09:01.599 08:05:10 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.599 08:05:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.599 ************************************ 00:09:01.599 END TEST json_config 00:09:01.599 ************************************ 00:09:01.599 08:05:10 -- common/autotest_common.sh@1142 -- # return 0 00:09:01.599 08:05:10 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:01.599 08:05:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.599 08:05:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.599 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:09:01.599 ************************************ 00:09:01.599 START TEST json_config_extra_key 00:09:01.599 ************************************ 00:09:01.599 08:05:10 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.599 08:05:10 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.599 08:05:10 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.599 08:05:10 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.599 08:05:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.599 08:05:10 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.599 08:05:10 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.599 08:05:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:01.599 08:05:10 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.599 08:05:10 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:01.599 INFO: launching applications... 
00:09:01.599 08:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:01.599 08:05:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:01.599 08:05:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:01.599 08:05:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:01.599 08:05:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:01.599 08:05:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:01.599 08:05:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:01.599 08:05:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:01.599 08:05:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3990838 00:09:01.599 08:05:10 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:01.599 08:05:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:01.599 Waiting for target to run... 
00:09:01.599 08:05:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3990838 /var/tmp/spdk_tgt.sock 00:09:01.599 08:05:10 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3990838 ']' 00:09:01.599 08:05:10 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:01.599 08:05:10 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.599 08:05:10 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:01.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:01.599 08:05:10 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.599 08:05:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:01.599 [2024-07-21 08:05:10.901205] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
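The `waitforlisten 3990838 /var/tmp/spdk_tgt.sock` call above blocks (with `max_retries=100`) until `spdk_tgt` is serving on its UNIX-domain socket. A simplified sketch of that idea, polling only for the socket path to appear, is below; `wait_for_socket` is a hypothetical name, and the real `autotest_common.sh` helper additionally verifies the target answers RPCs over the socket.

```shell
# Poll until a UNIX-domain socket path exists, up to a retry budget.
wait_for_socket() {
  local sock=$1
  local retries=${2:-100}     # the trace uses max_retries=100
  local i=0
  while [ "$i" -lt "$retries" ]; do
    [ -S "$sock" ] && return 0    # socket file is present
    sleep 0.1
    i=$((i + 1))
  done
  return 1                        # never appeared: caller should fail
}

# Usage as in the trace (comment only, path exists only on the CI node):
# wait_for_socket /var/tmp/spdk_tgt.sock || exit 1
```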
00:09:01.599 [2024-07-21 08:05:10.901303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3990838 ] 00:09:01.599 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.855 [2024-07-21 08:05:11.242010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.855 [2024-07-21 08:05:11.305605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.417 08:05:11 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.417 08:05:11 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:09:02.417 08:05:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:02.417 00:09:02.417 08:05:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:02.417 INFO: shutting down applications... 
00:09:02.417 08:05:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:02.417 08:05:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:02.417 08:05:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:02.417 08:05:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3990838 ]] 00:09:02.417 08:05:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3990838 00:09:02.417 08:05:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:02.417 08:05:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:02.417 08:05:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3990838 00:09:02.417 08:05:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:02.984 08:05:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:02.984 08:05:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:02.984 08:05:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3990838 00:09:02.984 08:05:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:02.984 08:05:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:02.984 08:05:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:02.984 08:05:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:02.984 SPDK target shutdown done 00:09:02.984 08:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:02.984 Success 00:09:02.984 00:09:02.984 real 0m1.539s 00:09:02.984 user 0m1.484s 00:09:02.984 sys 0m0.437s 00:09:02.984 08:05:12 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.984 08:05:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:02.984 
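The shutdown trace above (`json_config/common.sh@38-45`) signals the target with SIGINT, then polls `kill -0 $pid` up to 30 times with a half-second sleep before declaring "SPDK target shutdown done". That pattern can be sketched as below; `wait_for_exit` is a hypothetical helper name, and the demo signals with plain TERM rather than the trace's SIGINT (non-interactive shells start background jobs with SIGINT ignored).

```shell
# Poll `kill -0` until a process exits, up to retries half-second tries.
wait_for_exit() {
  local pid=$1
  local retries=${2:-30}      # the trace loops while (( i < 30 ))
  local i=0
  while [ "$i" -lt "$retries" ]; do
    if ! kill -0 "$pid" 2>/dev/null; then
      return 0                # process is gone
    fi
    sleep 0.5                 # same cadence as the trace
    i=$((i + 1))
  done
  return 1                    # still alive after ~15s
}

# Usage mirrors the trace: signal first, then poll for exit.
sleep 30 &
target=$!
kill "$target"
wait_for_exit "$target" && echo 'SPDK target shutdown done'
```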
************************************ 00:09:02.984 END TEST json_config_extra_key 00:09:02.984 ************************************ 00:09:02.984 08:05:12 -- common/autotest_common.sh@1142 -- # return 0 00:09:02.984 08:05:12 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:02.984 08:05:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:02.984 08:05:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.984 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:09:02.984 ************************************ 00:09:02.984 START TEST alias_rpc 00:09:02.984 ************************************ 00:09:02.984 08:05:12 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:02.984 * Looking for test storage... 00:09:02.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:09:02.984 08:05:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:02.984 08:05:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3991125 00:09:02.984 08:05:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:02.984 08:05:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3991125 00:09:02.984 08:05:12 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3991125 ']' 00:09:02.984 08:05:12 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.984 08:05:12 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.984 08:05:12 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:02.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.984 08:05:12 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.984 08:05:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.984 [2024-07-21 08:05:12.495358] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:02.984 [2024-07-21 08:05:12.495451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3991125 ] 00:09:02.984 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.984 [2024-07-21 08:05:12.551968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.242 [2024-07-21 08:05:12.636855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.499 08:05:12 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.499 08:05:12 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:03.499 08:05:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:03.755 08:05:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3991125 00:09:03.755 08:05:13 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3991125 ']' 00:09:03.755 08:05:13 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3991125 00:09:03.755 08:05:13 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:09:03.755 08:05:13 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.755 08:05:13 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3991125 00:09:03.755 08:05:13 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:03.755 08:05:13 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:03.755 
08:05:13 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3991125' 00:09:03.755 killing process with pid 3991125 00:09:03.755 08:05:13 alias_rpc -- common/autotest_common.sh@967 -- # kill 3991125 00:09:03.755 08:05:13 alias_rpc -- common/autotest_common.sh@972 -- # wait 3991125 00:09:04.011 00:09:04.011 real 0m1.197s 00:09:04.011 user 0m1.284s 00:09:04.011 sys 0m0.417s 00:09:04.011 08:05:13 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.011 08:05:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.011 ************************************ 00:09:04.011 END TEST alias_rpc 00:09:04.011 ************************************ 00:09:04.011 08:05:13 -- common/autotest_common.sh@1142 -- # return 0 00:09:04.011 08:05:13 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:09:04.011 08:05:13 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:04.011 08:05:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:04.011 08:05:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.011 08:05:13 -- common/autotest_common.sh@10 -- # set +x 00:09:04.011 ************************************ 00:09:04.011 START TEST spdkcli_tcp 00:09:04.011 ************************************ 00:09:04.011 08:05:13 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:04.268 * Looking for test storage... 
00:09:04.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:09:04.268 08:05:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:09:04.268 08:05:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:04.268 08:05:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:09:04.268 08:05:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:04.268 08:05:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:04.268 08:05:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:04.268 08:05:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:04.268 08:05:13 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.268 08:05:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.268 08:05:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3991329 00:09:04.268 08:05:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:04.268 08:05:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3991329 00:09:04.268 08:05:13 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3991329 ']' 00:09:04.268 08:05:13 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.268 08:05:13 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.268 08:05:13 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:04.268 08:05:13 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.268 08:05:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.268 [2024-07-21 08:05:13.736354] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:04.268 [2024-07-21 08:05:13.736436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3991329 ] 00:09:04.268 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.268 [2024-07-21 08:05:13.792373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:04.268 [2024-07-21 08:05:13.877265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.268 [2024-07-21 08:05:13.877270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.525 08:05:14 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.525 08:05:14 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:09:04.525 08:05:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3991345 00:09:04.525 08:05:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:04.525 08:05:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:04.783 [ 00:09:04.783 "bdev_malloc_delete", 00:09:04.783 "bdev_malloc_create", 00:09:04.783 "bdev_null_resize", 00:09:04.783 "bdev_null_delete", 00:09:04.783 "bdev_null_create", 00:09:04.783 "bdev_nvme_cuse_unregister", 00:09:04.783 "bdev_nvme_cuse_register", 00:09:04.783 "bdev_opal_new_user", 00:09:04.783 "bdev_opal_set_lock_state", 00:09:04.783 "bdev_opal_delete", 00:09:04.783 "bdev_opal_get_info", 00:09:04.783 "bdev_opal_create", 00:09:04.783 "bdev_nvme_opal_revert", 00:09:04.783 
"bdev_nvme_opal_init", 00:09:04.783 "bdev_nvme_send_cmd", 00:09:04.783 "bdev_nvme_get_path_iostat", 00:09:04.783 "bdev_nvme_get_mdns_discovery_info", 00:09:04.783 "bdev_nvme_stop_mdns_discovery", 00:09:04.783 "bdev_nvme_start_mdns_discovery", 00:09:04.783 "bdev_nvme_set_multipath_policy", 00:09:04.783 "bdev_nvme_set_preferred_path", 00:09:04.783 "bdev_nvme_get_io_paths", 00:09:04.783 "bdev_nvme_remove_error_injection", 00:09:04.783 "bdev_nvme_add_error_injection", 00:09:04.783 "bdev_nvme_get_discovery_info", 00:09:04.783 "bdev_nvme_stop_discovery", 00:09:04.783 "bdev_nvme_start_discovery", 00:09:04.783 "bdev_nvme_get_controller_health_info", 00:09:04.783 "bdev_nvme_disable_controller", 00:09:04.783 "bdev_nvme_enable_controller", 00:09:04.783 "bdev_nvme_reset_controller", 00:09:04.783 "bdev_nvme_get_transport_statistics", 00:09:04.783 "bdev_nvme_apply_firmware", 00:09:04.783 "bdev_nvme_detach_controller", 00:09:04.783 "bdev_nvme_get_controllers", 00:09:04.783 "bdev_nvme_attach_controller", 00:09:04.783 "bdev_nvme_set_hotplug", 00:09:04.783 "bdev_nvme_set_options", 00:09:04.783 "bdev_passthru_delete", 00:09:04.783 "bdev_passthru_create", 00:09:04.783 "bdev_lvol_set_parent_bdev", 00:09:04.783 "bdev_lvol_set_parent", 00:09:04.783 "bdev_lvol_check_shallow_copy", 00:09:04.783 "bdev_lvol_start_shallow_copy", 00:09:04.783 "bdev_lvol_grow_lvstore", 00:09:04.783 "bdev_lvol_get_lvols", 00:09:04.783 "bdev_lvol_get_lvstores", 00:09:04.783 "bdev_lvol_delete", 00:09:04.783 "bdev_lvol_set_read_only", 00:09:04.783 "bdev_lvol_resize", 00:09:04.783 "bdev_lvol_decouple_parent", 00:09:04.783 "bdev_lvol_inflate", 00:09:04.783 "bdev_lvol_rename", 00:09:04.783 "bdev_lvol_clone_bdev", 00:09:04.783 "bdev_lvol_clone", 00:09:04.783 "bdev_lvol_snapshot", 00:09:04.783 "bdev_lvol_create", 00:09:04.783 "bdev_lvol_delete_lvstore", 00:09:04.783 "bdev_lvol_rename_lvstore", 00:09:04.783 "bdev_lvol_create_lvstore", 00:09:04.783 "bdev_raid_set_options", 00:09:04.783 "bdev_raid_remove_base_bdev", 
00:09:04.783 "bdev_raid_add_base_bdev", 00:09:04.783 "bdev_raid_delete", 00:09:04.783 "bdev_raid_create", 00:09:04.783 "bdev_raid_get_bdevs", 00:09:04.783 "bdev_error_inject_error", 00:09:04.783 "bdev_error_delete", 00:09:04.783 "bdev_error_create", 00:09:04.783 "bdev_split_delete", 00:09:04.783 "bdev_split_create", 00:09:04.783 "bdev_delay_delete", 00:09:04.783 "bdev_delay_create", 00:09:04.783 "bdev_delay_update_latency", 00:09:04.783 "bdev_zone_block_delete", 00:09:04.783 "bdev_zone_block_create", 00:09:04.783 "blobfs_create", 00:09:04.783 "blobfs_detect", 00:09:04.783 "blobfs_set_cache_size", 00:09:04.783 "bdev_aio_delete", 00:09:04.783 "bdev_aio_rescan", 00:09:04.783 "bdev_aio_create", 00:09:04.783 "bdev_ftl_set_property", 00:09:04.783 "bdev_ftl_get_properties", 00:09:04.783 "bdev_ftl_get_stats", 00:09:04.783 "bdev_ftl_unmap", 00:09:04.783 "bdev_ftl_unload", 00:09:04.783 "bdev_ftl_delete", 00:09:04.783 "bdev_ftl_load", 00:09:04.783 "bdev_ftl_create", 00:09:04.783 "bdev_virtio_attach_controller", 00:09:04.783 "bdev_virtio_scsi_get_devices", 00:09:04.783 "bdev_virtio_detach_controller", 00:09:04.783 "bdev_virtio_blk_set_hotplug", 00:09:04.783 "bdev_iscsi_delete", 00:09:04.783 "bdev_iscsi_create", 00:09:04.783 "bdev_iscsi_set_options", 00:09:04.783 "accel_error_inject_error", 00:09:04.783 "ioat_scan_accel_module", 00:09:04.783 "dsa_scan_accel_module", 00:09:04.783 "iaa_scan_accel_module", 00:09:04.783 "vfu_virtio_create_scsi_endpoint", 00:09:04.783 "vfu_virtio_scsi_remove_target", 00:09:04.783 "vfu_virtio_scsi_add_target", 00:09:04.783 "vfu_virtio_create_blk_endpoint", 00:09:04.783 "vfu_virtio_delete_endpoint", 00:09:04.783 "keyring_file_remove_key", 00:09:04.783 "keyring_file_add_key", 00:09:04.783 "keyring_linux_set_options", 00:09:04.783 "iscsi_get_histogram", 00:09:04.783 "iscsi_enable_histogram", 00:09:04.783 "iscsi_set_options", 00:09:04.783 "iscsi_get_auth_groups", 00:09:04.783 "iscsi_auth_group_remove_secret", 00:09:04.783 "iscsi_auth_group_add_secret", 
00:09:04.783 "iscsi_delete_auth_group", 00:09:04.783 "iscsi_create_auth_group", 00:09:04.783 "iscsi_set_discovery_auth", 00:09:04.783 "iscsi_get_options", 00:09:04.783 "iscsi_target_node_request_logout", 00:09:04.783 "iscsi_target_node_set_redirect", 00:09:04.783 "iscsi_target_node_set_auth", 00:09:04.783 "iscsi_target_node_add_lun", 00:09:04.783 "iscsi_get_stats", 00:09:04.783 "iscsi_get_connections", 00:09:04.783 "iscsi_portal_group_set_auth", 00:09:04.783 "iscsi_start_portal_group", 00:09:04.783 "iscsi_delete_portal_group", 00:09:04.783 "iscsi_create_portal_group", 00:09:04.783 "iscsi_get_portal_groups", 00:09:04.783 "iscsi_delete_target_node", 00:09:04.783 "iscsi_target_node_remove_pg_ig_maps", 00:09:04.783 "iscsi_target_node_add_pg_ig_maps", 00:09:04.783 "iscsi_create_target_node", 00:09:04.783 "iscsi_get_target_nodes", 00:09:04.783 "iscsi_delete_initiator_group", 00:09:04.783 "iscsi_initiator_group_remove_initiators", 00:09:04.783 "iscsi_initiator_group_add_initiators", 00:09:04.783 "iscsi_create_initiator_group", 00:09:04.783 "iscsi_get_initiator_groups", 00:09:04.783 "nvmf_set_crdt", 00:09:04.783 "nvmf_set_config", 00:09:04.783 "nvmf_set_max_subsystems", 00:09:04.783 "nvmf_stop_mdns_prr", 00:09:04.783 "nvmf_publish_mdns_prr", 00:09:04.783 "nvmf_subsystem_get_listeners", 00:09:04.783 "nvmf_subsystem_get_qpairs", 00:09:04.783 "nvmf_subsystem_get_controllers", 00:09:04.783 "nvmf_get_stats", 00:09:04.783 "nvmf_get_transports", 00:09:04.783 "nvmf_create_transport", 00:09:04.783 "nvmf_get_targets", 00:09:04.783 "nvmf_delete_target", 00:09:04.783 "nvmf_create_target", 00:09:04.783 "nvmf_subsystem_allow_any_host", 00:09:04.783 "nvmf_subsystem_remove_host", 00:09:04.783 "nvmf_subsystem_add_host", 00:09:04.783 "nvmf_ns_remove_host", 00:09:04.784 "nvmf_ns_add_host", 00:09:04.784 "nvmf_subsystem_remove_ns", 00:09:04.784 "nvmf_subsystem_add_ns", 00:09:04.784 "nvmf_subsystem_listener_set_ana_state", 00:09:04.784 "nvmf_discovery_get_referrals", 00:09:04.784 
"nvmf_discovery_remove_referral", 00:09:04.784 "nvmf_discovery_add_referral", 00:09:04.784 "nvmf_subsystem_remove_listener", 00:09:04.784 "nvmf_subsystem_add_listener", 00:09:04.784 "nvmf_delete_subsystem", 00:09:04.784 "nvmf_create_subsystem", 00:09:04.784 "nvmf_get_subsystems", 00:09:04.784 "env_dpdk_get_mem_stats", 00:09:04.784 "nbd_get_disks", 00:09:04.784 "nbd_stop_disk", 00:09:04.784 "nbd_start_disk", 00:09:04.784 "ublk_recover_disk", 00:09:04.784 "ublk_get_disks", 00:09:04.784 "ublk_stop_disk", 00:09:04.784 "ublk_start_disk", 00:09:04.784 "ublk_destroy_target", 00:09:04.784 "ublk_create_target", 00:09:04.784 "virtio_blk_create_transport", 00:09:04.784 "virtio_blk_get_transports", 00:09:04.784 "vhost_controller_set_coalescing", 00:09:04.784 "vhost_get_controllers", 00:09:04.784 "vhost_delete_controller", 00:09:04.784 "vhost_create_blk_controller", 00:09:04.784 "vhost_scsi_controller_remove_target", 00:09:04.784 "vhost_scsi_controller_add_target", 00:09:04.784 "vhost_start_scsi_controller", 00:09:04.784 "vhost_create_scsi_controller", 00:09:04.784 "thread_set_cpumask", 00:09:04.784 "framework_get_governor", 00:09:04.784 "framework_get_scheduler", 00:09:04.784 "framework_set_scheduler", 00:09:04.784 "framework_get_reactors", 00:09:04.784 "thread_get_io_channels", 00:09:04.784 "thread_get_pollers", 00:09:04.784 "thread_get_stats", 00:09:04.784 "framework_monitor_context_switch", 00:09:04.784 "spdk_kill_instance", 00:09:04.784 "log_enable_timestamps", 00:09:04.784 "log_get_flags", 00:09:04.784 "log_clear_flag", 00:09:04.784 "log_set_flag", 00:09:04.784 "log_get_level", 00:09:04.784 "log_set_level", 00:09:04.784 "log_get_print_level", 00:09:04.784 "log_set_print_level", 00:09:04.784 "framework_enable_cpumask_locks", 00:09:04.784 "framework_disable_cpumask_locks", 00:09:04.784 "framework_wait_init", 00:09:04.784 "framework_start_init", 00:09:04.784 "scsi_get_devices", 00:09:04.784 "bdev_get_histogram", 00:09:04.784 "bdev_enable_histogram", 00:09:04.784 
"bdev_set_qos_limit", 00:09:04.784 "bdev_set_qd_sampling_period", 00:09:04.784 "bdev_get_bdevs", 00:09:04.784 "bdev_reset_iostat", 00:09:04.784 "bdev_get_iostat", 00:09:04.784 "bdev_examine", 00:09:04.784 "bdev_wait_for_examine", 00:09:04.784 "bdev_set_options", 00:09:04.784 "notify_get_notifications", 00:09:04.784 "notify_get_types", 00:09:04.784 "accel_get_stats", 00:09:04.784 "accel_set_options", 00:09:04.784 "accel_set_driver", 00:09:04.784 "accel_crypto_key_destroy", 00:09:04.784 "accel_crypto_keys_get", 00:09:04.784 "accel_crypto_key_create", 00:09:04.784 "accel_assign_opc", 00:09:04.784 "accel_get_module_info", 00:09:04.784 "accel_get_opc_assignments", 00:09:04.784 "vmd_rescan", 00:09:04.784 "vmd_remove_device", 00:09:04.784 "vmd_enable", 00:09:04.784 "sock_get_default_impl", 00:09:04.784 "sock_set_default_impl", 00:09:04.784 "sock_impl_set_options", 00:09:04.784 "sock_impl_get_options", 00:09:04.784 "iobuf_get_stats", 00:09:04.784 "iobuf_set_options", 00:09:04.784 "keyring_get_keys", 00:09:04.784 "framework_get_pci_devices", 00:09:04.784 "framework_get_config", 00:09:04.784 "framework_get_subsystems", 00:09:04.784 "vfu_tgt_set_base_path", 00:09:04.784 "trace_get_info", 00:09:04.784 "trace_get_tpoint_group_mask", 00:09:04.784 "trace_disable_tpoint_group", 00:09:04.784 "trace_enable_tpoint_group", 00:09:04.784 "trace_clear_tpoint_mask", 00:09:04.784 "trace_set_tpoint_mask", 00:09:04.784 "spdk_get_version", 00:09:04.784 "rpc_get_methods" 00:09:04.784 ] 00:09:04.784 08:05:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:04.784 08:05:14 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.784 08:05:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.784 08:05:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:04.784 08:05:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3991329 00:09:04.784 08:05:14 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3991329 ']' 
00:09:04.784 08:05:14 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3991329 00:09:04.784 08:05:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:09:04.784 08:05:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:04.784 08:05:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3991329 00:09:05.040 08:05:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:05.040 08:05:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:05.040 08:05:14 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3991329' 00:09:05.040 killing process with pid 3991329 00:09:05.040 08:05:14 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3991329 00:09:05.040 08:05:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3991329 00:09:05.298 00:09:05.298 real 0m1.210s 00:09:05.298 user 0m2.159s 00:09:05.298 sys 0m0.437s 00:09:05.298 08:05:14 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.298 08:05:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:05.298 ************************************ 00:09:05.298 END TEST spdkcli_tcp 00:09:05.298 ************************************ 00:09:05.298 08:05:14 -- common/autotest_common.sh@1142 -- # return 0 00:09:05.298 08:05:14 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:05.298 08:05:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:05.298 08:05:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.298 08:05:14 -- common/autotest_common.sh@10 -- # set +x 00:09:05.298 ************************************ 00:09:05.298 START TEST dpdk_mem_utility 00:09:05.298 ************************************ 00:09:05.298 08:05:14 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:05.555 * Looking for test storage... 00:09:05.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:05.555 08:05:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:05.555 08:05:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3991530 00:09:05.555 08:05:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:05.555 08:05:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3991530 00:09:05.555 08:05:14 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3991530 ']' 00:09:05.555 08:05:14 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.555 08:05:14 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.555 08:05:14 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.555 08:05:14 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.555 08:05:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:05.555 [2024-07-21 08:05:14.992283] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:09:05.555 [2024-07-21 08:05:14.992379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3991530 ] 00:09:05.555 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.555 [2024-07-21 08:05:15.049833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.555 [2024-07-21 08:05:15.138882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.811 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.811 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:09:05.811 08:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:05.811 08:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:05.811 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.811 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:05.811 { 00:09:05.811 "filename": "/tmp/spdk_mem_dump.txt" 00:09:05.811 } 00:09:05.811 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.811 08:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:06.068 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:06.068 1 heaps totaling size 814.000000 MiB 00:09:06.068 size: 814.000000 MiB heap id: 0 00:09:06.068 end heaps---------- 00:09:06.068 8 mempools totaling size 598.116089 MiB 00:09:06.068 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:06.068 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:06.068 size: 84.521057 MiB name: bdev_io_3991530 00:09:06.068 size: 51.011292 MiB name: evtpool_3991530 
00:09:06.068 size: 50.003479 MiB name: msgpool_3991530 00:09:06.068 size: 21.763794 MiB name: PDU_Pool 00:09:06.068 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:06.068 size: 0.026123 MiB name: Session_Pool 00:09:06.068 end mempools------- 00:09:06.068 6 memzones totaling size 4.142822 MiB 00:09:06.068 size: 1.000366 MiB name: RG_ring_0_3991530 00:09:06.068 size: 1.000366 MiB name: RG_ring_1_3991530 00:09:06.068 size: 1.000366 MiB name: RG_ring_4_3991530 00:09:06.068 size: 1.000366 MiB name: RG_ring_5_3991530 00:09:06.068 size: 0.125366 MiB name: RG_ring_2_3991530 00:09:06.068 size: 0.015991 MiB name: RG_ring_3_3991530 00:09:06.068 end memzones------- 00:09:06.068 08:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:06.068 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:09:06.068 list of free elements. size: 12.519348 MiB 00:09:06.068 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:06.068 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:06.068 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:06.068 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:06.068 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:06.068 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:06.068 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:06.068 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:06.068 element at address: 0x200000200000 with size: 0.841614 MiB 00:09:06.069 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:09:06.069 element at address: 0x20000b200000 with size: 0.490723 MiB 00:09:06.069 element at address: 0x200000800000 with size: 0.487793 MiB 00:09:06.069 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:06.069 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:09:06.069 element at address: 0x200003a00000 with size: 0.355530 MiB 00:09:06.069 list of standard malloc elements. size: 199.218079 MiB 00:09:06.069 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:06.069 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:06.069 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:06.069 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:06.069 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:06.069 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:06.069 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:06.069 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:06.069 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:06.069 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:09:06.069 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:09:06.069 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:09:06.069 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:06.069 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:06.069 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:06.069 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:06.069 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:06.069 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:06.069 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:09:06.069 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:06.069 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:06.069 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:06.069 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:06.069 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:06.069 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:06.069 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:06.069 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:06.069 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:06.069 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200027e69040 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:06.069 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:06.069 list of memzone associated elements. 
size: 602.262573 MiB 00:09:06.069 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:06.069 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:06.069 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:06.069 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:06.069 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:06.069 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3991530_0 00:09:06.069 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:06.069 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3991530_0 00:09:06.069 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:06.069 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3991530_0 00:09:06.069 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:06.069 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:06.069 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:06.069 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:06.069 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:06.069 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3991530 00:09:06.069 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:06.069 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3991530 00:09:06.069 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:06.069 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3991530 00:09:06.069 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:06.069 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:06.069 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:06.069 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:06.069 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:06.069 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:06.069 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:06.069 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:06.069 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:06.069 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3991530 00:09:06.069 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:06.069 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3991530 00:09:06.069 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:06.069 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3991530 00:09:06.069 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:06.069 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3991530 00:09:06.069 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:06.069 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3991530 00:09:06.069 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:06.069 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:06.069 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:06.069 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:06.069 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:06.069 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:06.069 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:06.069 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3991530 00:09:06.069 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:06.069 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:06.069 element at address: 0x200027e69100 with size: 0.023743 MiB 00:09:06.069 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:06.069 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:09:06.069 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3991530 00:09:06.069 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:09:06.069 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:06.069 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:09:06.069 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3991530 00:09:06.069 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:06.069 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3991530 00:09:06.069 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:09:06.069 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:06.069 08:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:06.069 08:05:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3991530 00:09:06.069 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3991530 ']' 00:09:06.069 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3991530 00:09:06.069 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:09:06.069 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.069 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3991530 00:09:06.069 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.069 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.069 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3991530' 00:09:06.069 killing process with pid 3991530 00:09:06.069 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3991530 00:09:06.069 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3991530 00:09:06.327 00:09:06.327 real 0m1.035s 
00:09:06.327 user 0m1.002s 00:09:06.327 sys 0m0.411s 00:09:06.327 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.327 08:05:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:06.327 ************************************ 00:09:06.327 END TEST dpdk_mem_utility 00:09:06.327 ************************************ 00:09:06.327 08:05:15 -- common/autotest_common.sh@1142 -- # return 0 00:09:06.327 08:05:15 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:06.327 08:05:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:06.327 08:05:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.327 08:05:15 -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 ************************************ 00:09:06.584 START TEST event 00:09:06.584 ************************************ 00:09:06.584 08:05:15 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:06.584 * Looking for test storage... 
00:09:06.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:06.584 08:05:16 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:06.584 08:05:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:06.584 08:05:16 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:06.584 08:05:16 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:06.584 08:05:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.584 08:05:16 event -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 ************************************ 00:09:06.584 START TEST event_perf 00:09:06.584 ************************************ 00:09:06.584 08:05:16 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:06.584 Running I/O for 1 seconds...[2024-07-21 08:05:16.069715] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:09:06.584 [2024-07-21 08:05:16.069779] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3991726 ] 00:09:06.584 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.584 [2024-07-21 08:05:16.131450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.842 [2024-07-21 08:05:16.225392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.842 [2024-07-21 08:05:16.225458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.842 [2024-07-21 08:05:16.225551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.842 [2024-07-21 08:05:16.225553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.771 Running I/O for 1 seconds... 00:09:07.771 lcore 0: 227871 00:09:07.771 lcore 1: 227869 00:09:07.771 lcore 2: 227869 00:09:07.771 lcore 3: 227869 00:09:07.771 done. 
00:09:07.771 00:09:07.771 real 0m1.248s 00:09:07.771 user 0m4.157s 00:09:07.771 sys 0m0.086s 00:09:07.771 08:05:17 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.771 08:05:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:07.771 ************************************ 00:09:07.771 END TEST event_perf 00:09:07.771 ************************************ 00:09:07.771 08:05:17 event -- common/autotest_common.sh@1142 -- # return 0 00:09:07.771 08:05:17 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:07.771 08:05:17 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:07.771 08:05:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.771 08:05:17 event -- common/autotest_common.sh@10 -- # set +x 00:09:07.771 ************************************ 00:09:07.771 START TEST event_reactor 00:09:07.771 ************************************ 00:09:07.771 08:05:17 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:07.771 [2024-07-21 08:05:17.361773] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:09:07.771 [2024-07-21 08:05:17.361830] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3991885 ] 00:09:07.771 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.028 [2024-07-21 08:05:17.422513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.028 [2024-07-21 08:05:17.515638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.398 test_start 00:09:09.398 oneshot 00:09:09.398 tick 100 00:09:09.398 tick 100 00:09:09.398 tick 250 00:09:09.398 tick 100 00:09:09.398 tick 100 00:09:09.398 tick 100 00:09:09.398 tick 250 00:09:09.398 tick 500 00:09:09.398 tick 100 00:09:09.398 tick 100 00:09:09.398 tick 250 00:09:09.398 tick 100 00:09:09.398 tick 100 00:09:09.398 test_end 00:09:09.398 00:09:09.398 real 0m1.247s 00:09:09.398 user 0m1.161s 00:09:09.398 sys 0m0.082s 00:09:09.398 08:05:18 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.398 08:05:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:09.398 ************************************ 00:09:09.398 END TEST event_reactor 00:09:09.398 ************************************ 00:09:09.398 08:05:18 event -- common/autotest_common.sh@1142 -- # return 0 00:09:09.398 08:05:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:09.398 08:05:18 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:09.398 08:05:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.398 08:05:18 event -- common/autotest_common.sh@10 -- # set +x 00:09:09.398 ************************************ 00:09:09.398 START TEST event_reactor_perf 00:09:09.398 ************************************ 00:09:09.398 08:05:18 
event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:09.398 [2024-07-21 08:05:18.661004] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:09.398 [2024-07-21 08:05:18.661071] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992038 ] 00:09:09.398 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.398 [2024-07-21 08:05:18.722826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.398 [2024-07-21 08:05:18.815596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.329 test_start 00:09:10.329 test_end 00:09:10.329 Performance: 358150 events per second 00:09:10.329 00:09:10.329 real 0m1.250s 00:09:10.329 user 0m1.162s 00:09:10.329 sys 0m0.084s 00:09:10.329 08:05:19 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.329 08:05:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:10.329 ************************************ 00:09:10.329 END TEST event_reactor_perf 00:09:10.329 ************************************ 00:09:10.329 08:05:19 event -- common/autotest_common.sh@1142 -- # return 0 00:09:10.329 08:05:19 event -- event/event.sh@49 -- # uname -s 00:09:10.329 08:05:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:10.329 08:05:19 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:10.329 08:05:19 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:10.329 08:05:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.329 08:05:19 event -- common/autotest_common.sh@10 -- # set +x 
00:09:10.329 ************************************ 00:09:10.329 START TEST event_scheduler 00:09:10.329 ************************************ 00:09:10.329 08:05:19 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:10.587 * Looking for test storage... 00:09:10.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:10.587 08:05:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:10.588 08:05:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3992220 00:09:10.588 08:05:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:10.588 08:05:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:10.588 08:05:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3992220 00:09:10.588 08:05:20 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3992220 ']' 00:09:10.588 08:05:20 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.588 08:05:20 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.588 08:05:20 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.588 08:05:20 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.588 08:05:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:10.588 [2024-07-21 08:05:20.045143] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:09:10.588 [2024-07-21 08:05:20.045233] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992220 ] 00:09:10.588 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.588 [2024-07-21 08:05:20.103982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.588 [2024-07-21 08:05:20.190274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.588 [2024-07-21 08:05:20.190337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.588 [2024-07-21 08:05:20.190582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.588 [2024-07-21 08:05:20.190585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.846 08:05:20 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.846 08:05:20 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:09:10.846 08:05:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:10.846 08:05:20 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.846 08:05:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 [2024-07-21 08:05:20.263376] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:10.846 [2024-07-21 08:05:20.263406] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:09:10.846 [2024-07-21 08:05:20.263438] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:10.846 [2024-07-21 08:05:20.263449] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:10.846 [2024-07-21 08:05:20.263459] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:09:10.846 08:05:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.846 08:05:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:10.846 08:05:20 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.846 08:05:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 [2024-07-21 08:05:20.353499] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:10.846 08:05:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.846 08:05:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:10.846 08:05:20 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:10.846 08:05:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.846 08:05:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 ************************************ 00:09:10.846 START TEST scheduler_create_thread 00:09:10.846 ************************************ 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 2 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 3 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 4 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 5 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:10.846 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 6 
00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 7 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 8 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 9 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:10.847 08:05:20 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 10 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:10.847 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.104 08:05:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:11.668 08:05:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.668 08:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:11.668 08:05:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:11.668 08:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.668 08:05:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:12.598 08:05:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.598 00:09:12.598 real 0m1.754s 00:09:12.598 user 0m0.015s 00:09:12.598 sys 0m0.004s 00:09:12.598 08:05:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.598 08:05:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:12.598 ************************************ 00:09:12.598 END TEST scheduler_create_thread 00:09:12.598 ************************************ 00:09:12.598 08:05:22 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:09:12.598 08:05:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:12.598 08:05:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3992220 00:09:12.598 08:05:22 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3992220 ']' 00:09:12.598 08:05:22 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3992220 00:09:12.598 08:05:22 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:09:12.598 08:05:22 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.598 08:05:22 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3992220 00:09:12.598 08:05:22 event.event_scheduler -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:09:12.598 08:05:22 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:09:12.598 08:05:22 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3992220' 00:09:12.598 killing process with pid 3992220 00:09:12.598 08:05:22 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3992220 00:09:12.598 08:05:22 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3992220 00:09:13.187 [2024-07-21 08:05:22.613266] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:13.445 00:09:13.445 real 0m2.874s 00:09:13.445 user 0m3.749s 00:09:13.445 sys 0m0.339s 00:09:13.445 08:05:22 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.445 08:05:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:13.445 ************************************ 00:09:13.445 END TEST event_scheduler 00:09:13.445 ************************************ 00:09:13.445 08:05:22 event -- common/autotest_common.sh@1142 -- # return 0 00:09:13.445 08:05:22 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:13.445 08:05:22 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:13.445 08:05:22 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:13.445 08:05:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.445 08:05:22 event -- common/autotest_common.sh@10 -- # set +x 00:09:13.445 ************************************ 00:09:13.445 START TEST app_repeat 00:09:13.445 ************************************ 00:09:13.445 08:05:22 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:13.445 08:05:22 
event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3992665 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3992665' 00:09:13.445 Process app_repeat pid: 3992665 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:13.445 spdk_app_start Round 0 00:09:13.445 08:05:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3992665 /var/tmp/spdk-nbd.sock 00:09:13.445 08:05:22 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3992665 ']' 00:09:13.445 08:05:22 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:13.445 08:05:22 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.445 08:05:22 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:13.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:13.445 08:05:22 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.445 08:05:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:13.445 [2024-07-21 08:05:22.890176] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:13.445 [2024-07-21 08:05:22.890231] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992665 ] 00:09:13.445 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.445 [2024-07-21 08:05:22.954922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:13.445 [2024-07-21 08:05:23.055639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.445 [2024-07-21 08:05:23.055678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.703 08:05:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.703 08:05:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:13.703 08:05:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:13.960 Malloc0 00:09:13.960 08:05:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:14.217 Malloc1 00:09:14.217 08:05:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local 
bdev_list 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:14.217 08:05:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:14.475 /dev/nbd0 00:09:14.475 08:05:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:14.475 08:05:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@871 
-- # break 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:14.475 1+0 records in 00:09:14.475 1+0 records out 00:09:14.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158087 s, 25.9 MB/s 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:14.475 08:05:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:14.475 08:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.475 08:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:14.475 08:05:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:14.732 /dev/nbd1 00:09:14.732 08:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:14.732 08:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:14.732 1+0 records in 00:09:14.732 1+0 records out 00:09:14.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207289 s, 19.8 MB/s 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:14.732 08:05:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:14.732 08:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.732 08:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:14.732 08:05:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:14.732 08:05:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.732 08:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:14.990 { 00:09:14.990 "nbd_device": "/dev/nbd0", 00:09:14.990 
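The `waitfornbd` helper traced above polls `/proc/partitions` up to 20 times until the nbd device shows up before attempting I/O. A minimal standalone sketch of that retry pattern follows; the 20-attempt limit mirrors the trace, but the probed condition here is a hypothetical marker file rather than a real `/dev/nbdX`, so the sketch runs without an NBD device attached:

```shell
#!/bin/sh
# Poll until the given command succeeds, up to 20 attempts (the limit
# used by waitfornbd in the trace), sleeping briefly between tries.
# Returns 0 on success, 1 if the condition never held.
wait_for() {
    i=1
    while [ "$i" -le 20 ]; do
        if "$@"; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

# Stand-in for 'grep -q -w nbd0 /proc/partitions': wait for a marker
# file that a background job creates shortly after we start polling.
marker=$(mktemp -u)
( sleep 0.3; : > "$marker" ) &
wait_for test -e "$marker" && echo "device ready"
```

In the real helper the polled command is `grep -q -w "$nbd_name" /proc/partitions`; everything else (bounded retries, short sleep, a clean success/timeout exit code) is the same shape.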
"bdev_name": "Malloc0" 00:09:14.990 }, 00:09:14.990 { 00:09:14.990 "nbd_device": "/dev/nbd1", 00:09:14.990 "bdev_name": "Malloc1" 00:09:14.990 } 00:09:14.990 ]' 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:14.990 { 00:09:14.990 "nbd_device": "/dev/nbd0", 00:09:14.990 "bdev_name": "Malloc0" 00:09:14.990 }, 00:09:14.990 { 00:09:14.990 "nbd_device": "/dev/nbd1", 00:09:14.990 "bdev_name": "Malloc1" 00:09:14.990 } 00:09:14.990 ]' 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:14.990 /dev/nbd1' 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:14.990 /dev/nbd1' 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
bs=4096 count=256 00:09:14.990 256+0 records in 00:09:14.990 256+0 records out 00:09:14.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501968 s, 209 MB/s 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:14.990 256+0 records in 00:09:14.990 256+0 records out 00:09:14.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022686 s, 46.2 MB/s 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:14.990 256+0 records in 00:09:14.990 256+0 records out 00:09:14.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243648 s, 43.0 MB/s 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
/dev/nbd0 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.990 08:05:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:15.248 08:05:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:15.248 08:05:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:15.248 08:05:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:15.248 08:05:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.248 08:05:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.248 08:05:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:15.248 08:05:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:15.248 08:05:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.248 08:05:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.248 
08:05:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:15.504 08:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:15.504 08:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:15.761 08:05:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:15.761 08:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.761 08:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.761 08:05:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:15.761 08:05:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:15.761 08:05:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.761 08:05:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.761 08:05:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.762 08:05:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.762 08:05:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:15.762 08:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:15.762 08:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:16.018 08:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:16.018 08:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:16.018 08:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:16.018 08:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:16.018 08:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:16.018 08:05:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:16.018 
08:05:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:16.018 08:05:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:16.018 08:05:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:16.018 08:05:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:16.275 08:05:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:16.532 [2024-07-21 08:05:25.906836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:16.532 [2024-07-21 08:05:25.997913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.532 [2024-07-21 08:05:25.997913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.532 [2024-07-21 08:05:26.056794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:16.532 [2024-07-21 08:05:26.056859] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:19.079 08:05:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:19.079 08:05:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:19.079 spdk_app_start Round 1 00:09:19.079 08:05:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3992665 /var/tmp/spdk-nbd.sock 00:09:19.079 08:05:28 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3992665 ']' 00:09:19.079 08:05:28 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:19.079 08:05:28 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.079 08:05:28 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:19.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:19.079 08:05:28 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.079 08:05:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:19.336 08:05:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.336 08:05:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:19.336 08:05:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:19.593 Malloc0 00:09:19.593 08:05:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:19.850 Malloc1 00:09:19.850 08:05:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:19.850 08:05:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.850 08:05:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:19.850 08:05:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:19.850 08:05:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:19.850 08:05:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:19.850 08:05:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:19.850 08:05:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.850 08:05:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:19.850 08:05:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:19.851 08:05:29 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:19.851 08:05:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:19.851 08:05:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:19.851 08:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:19.851 08:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:19.851 08:05:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:20.108 /dev/nbd0 00:09:20.108 08:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:20.108 08:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:20.108 1+0 records in 00:09:20.108 1+0 records out 00:09:20.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196336 s, 20.9 MB/s 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.108 08:05:29 
event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:20.108 08:05:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:20.108 08:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.108 08:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.108 08:05:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:20.366 /dev/nbd1 00:09:20.366 08:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:20.366 08:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:20.366 1+0 records in 00:09:20.366 1+0 records out 00:09:20.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187597 s, 
21.8 MB/s 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:20.366 08:05:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.367 08:05:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:20.367 08:05:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:20.367 08:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.367 08:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.367 08:05:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:20.367 08:05:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.367 08:05:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:20.623 08:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:20.624 { 00:09:20.624 "nbd_device": "/dev/nbd0", 00:09:20.624 "bdev_name": "Malloc0" 00:09:20.624 }, 00:09:20.624 { 00:09:20.624 "nbd_device": "/dev/nbd1", 00:09:20.624 "bdev_name": "Malloc1" 00:09:20.624 } 00:09:20.624 ]' 00:09:20.624 08:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:20.624 { 00:09:20.624 "nbd_device": "/dev/nbd0", 00:09:20.624 "bdev_name": "Malloc0" 00:09:20.624 }, 00:09:20.624 { 00:09:20.624 "nbd_device": "/dev/nbd1", 00:09:20.624 "bdev_name": "Malloc1" 00:09:20.624 } 00:09:20.624 ]' 00:09:20.624 08:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:20.881 /dev/nbd1' 00:09:20.881 08:05:30 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:20.881 /dev/nbd1' 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:20.881 256+0 records in 00:09:20.881 256+0 records out 00:09:20.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525434 s, 200 MB/s 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:20.881 256+0 records in 00:09:20.881 256+0 records out 00:09:20.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240914 s, 43.5 MB/s 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.881 08:05:30 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:20.881 256+0 records in 00:09:20.881 256+0 records out 00:09:20.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253436 s, 41.4 MB/s 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.881 08:05:30 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.881 08:05:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:21.138 08:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:21.138 08:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:21.138 08:05:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:21.138 08:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.138 08:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.138 08:05:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:21.138 08:05:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:21.138 08:05:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.138 08:05:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.138 08:05:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:21.394 08:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:21.395 08:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:21.395 08:05:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:21.395 08:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.395 08:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.395 08:05:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:09:21.395 08:05:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:21.395 08:05:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.395 08:05:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:21.395 08:05:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.395 08:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:21.651 08:05:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:21.651 08:05:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:21.908 08:05:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:22.165 [2024-07-21 08:05:31.676540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:22.165 [2024-07-21 08:05:31.766623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 
00:09:22.165 [2024-07-21 08:05:31.766624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.421 [2024-07-21 08:05:31.824459] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:22.421 [2024-07-21 08:05:31.824529] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:24.942 08:05:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:24.942 08:05:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:24.942 spdk_app_start Round 2 00:09:24.942 08:05:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3992665 /var/tmp/spdk-nbd.sock 00:09:24.942 08:05:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3992665 ']' 00:09:24.943 08:05:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:24.943 08:05:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.943 08:05:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:24.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:24.943 08:05:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.943 08:05:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:25.199 08:05:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.199 08:05:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:25.199 08:05:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:25.457 Malloc0 00:09:25.457 08:05:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:25.714 Malloc1 00:09:25.714 08:05:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.714 08:05:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:25.972 /dev/nbd0 00:09:25.972 08:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:25.972 08:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:25.972 1+0 records in 00:09:25.972 1+0 records out 00:09:25.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197791 s, 20.7 MB/s 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:25.972 08:05:35 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:25.972 08:05:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:25.972 08:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:25.972 08:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.972 08:05:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:26.237 /dev/nbd1 00:09:26.237 08:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:26.237 08:05:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:26.237 1+0 records in 00:09:26.237 1+0 records out 00:09:26.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191205 s, 21.4 MB/s 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:26.237 08:05:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:26.237 08:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:26.237 08:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:26.237 08:05:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:26.237 08:05:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.237 08:05:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:26.530 { 00:09:26.530 "nbd_device": "/dev/nbd0", 00:09:26.530 "bdev_name": "Malloc0" 00:09:26.530 }, 00:09:26.530 { 00:09:26.530 "nbd_device": "/dev/nbd1", 00:09:26.530 "bdev_name": "Malloc1" 00:09:26.530 } 00:09:26.530 ]' 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:26.530 { 00:09:26.530 "nbd_device": "/dev/nbd0", 00:09:26.530 "bdev_name": "Malloc0" 00:09:26.530 }, 00:09:26.530 { 00:09:26.530 "nbd_device": "/dev/nbd1", 00:09:26.530 "bdev_name": "Malloc1" 00:09:26.530 } 00:09:26.530 ]' 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:26.530 /dev/nbd1' 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:26.530 /dev/nbd1' 00:09:26.530 
08:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:26.530 256+0 records in 00:09:26.530 256+0 records out 00:09:26.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435013 s, 241 MB/s 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:26.530 256+0 records in 00:09:26.530 256+0 records out 00:09:26.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222405 s, 47.1 MB/s 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:26.530 256+0 records in 00:09:26.530 256+0 records out 00:09:26.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02326 s, 45.1 MB/s 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:26.530 08:05:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:26.531 08:05:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.531 08:05:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:09:26.531 08:05:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:26.531 08:05:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:26.531 08:05:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.531 08:05:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:26.788 08:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:26.788 08:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:26.788 08:05:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:26.788 08:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.788 08:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.788 08:05:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:26.788 08:05:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:26.788 08:05:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.788 08:05:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.788 08:05:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:27.046 08:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:27.046 08:05:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:27.046 08:05:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:27.046 08:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.046 08:05:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.046 08:05:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:27.302 08:05:36 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:09:27.302 08:05:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.302 08:05:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:27.302 08:05:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.302 08:05:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:27.302 08:05:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:27.302 08:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:27.302 08:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:27.559 08:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:27.559 08:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:27.559 08:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:27.559 08:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:27.559 08:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:27.559 08:05:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:27.559 08:05:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:27.559 08:05:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:27.559 08:05:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:27.559 08:05:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:27.815 08:05:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:28.073 [2024-07-21 08:05:37.449034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:28.073 [2024-07-21 08:05:37.539076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.073 [2024-07-21 08:05:37.539078] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.073 [2024-07-21 08:05:37.601704] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:28.073 [2024-07-21 08:05:37.601780] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:31.350 08:05:40 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3992665 /var/tmp/spdk-nbd.sock 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3992665 ']' 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:31.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:31.350 08:05:40 event.app_repeat -- event/event.sh@39 -- # killprocess 3992665 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3992665 ']' 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3992665 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3992665 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3992665' 00:09:31.350 killing process with pid 3992665 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3992665 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3992665 00:09:31.350 spdk_app_start is called in Round 0. 00:09:31.350 Shutdown signal received, stop current app iteration 00:09:31.350 Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 reinitialization... 00:09:31.350 spdk_app_start is called in Round 1. 00:09:31.350 Shutdown signal received, stop current app iteration 00:09:31.350 Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 reinitialization... 00:09:31.350 spdk_app_start is called in Round 2. 
00:09:31.350 Shutdown signal received, stop current app iteration 00:09:31.350 Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 reinitialization... 00:09:31.350 spdk_app_start is called in Round 3. 00:09:31.350 Shutdown signal received, stop current app iteration 00:09:31.350 08:05:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:31.350 08:05:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:31.350 00:09:31.350 real 0m17.837s 00:09:31.350 user 0m38.889s 00:09:31.350 sys 0m3.172s 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.350 08:05:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:31.350 ************************************ 00:09:31.350 END TEST app_repeat 00:09:31.350 ************************************ 00:09:31.350 08:05:40 event -- common/autotest_common.sh@1142 -- # return 0 00:09:31.350 08:05:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:31.350 08:05:40 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:31.350 08:05:40 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:31.350 08:05:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.350 08:05:40 event -- common/autotest_common.sh@10 -- # set +x 00:09:31.350 ************************************ 00:09:31.350 START TEST cpu_locks 00:09:31.350 ************************************ 00:09:31.350 08:05:40 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:31.350 * Looking for test storage... 
00:09:31.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:31.350 08:05:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:31.350 08:05:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:31.350 08:05:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:31.350 08:05:40 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:31.350 08:05:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:31.350 08:05:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.350 08:05:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:31.350 ************************************ 00:09:31.350 START TEST default_locks 00:09:31.350 ************************************ 00:09:31.350 08:05:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:09:31.350 08:05:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3995018 00:09:31.350 08:05:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:31.351 08:05:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3995018 00:09:31.351 08:05:40 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3995018 ']' 00:09:31.351 08:05:40 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.351 08:05:40 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.351 08:05:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:31.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.351 08:05:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.351 08:05:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:31.351 [2024-07-21 08:05:40.877804] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:31.351 [2024-07-21 08:05:40.877894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995018 ] 00:09:31.351 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.351 [2024-07-21 08:05:40.933139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.608 [2024-07-21 08:05:41.018279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.864 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.864 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:09:31.864 08:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3995018 00:09:31.864 08:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3995018 00:09:31.864 08:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:32.131 lslocks: write error 00:09:32.131 08:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3995018 00:09:32.131 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3995018 ']' 00:09:32.131 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3995018 00:09:32.131 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:09:32.131 08:05:41 event.cpu_locks.default_locks 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.131 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3995018 00:09:32.131 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.131 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.131 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3995018' 00:09:32.131 killing process with pid 3995018 00:09:32.131 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3995018 00:09:32.131 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3995018 00:09:32.388 08:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3995018 00:09:32.388 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3995018 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3995018 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3995018 ']' 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:32.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3995018) - No such process 00:09:32.389 ERROR: process (pid: 3995018) is no longer running 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:32.389 00:09:32.389 real 0m1.146s 00:09:32.389 user 0m1.089s 00:09:32.389 sys 0m0.498s 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.389 08:05:41 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.389 ************************************ 00:09:32.389 END TEST default_locks 00:09:32.389 ************************************ 00:09:32.389 08:05:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:32.389 08:05:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:32.389 08:05:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:32.389 08:05:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.389 08:05:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:32.646 ************************************ 00:09:32.646 START TEST default_locks_via_rpc 00:09:32.646 ************************************ 00:09:32.646 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:09:32.646 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3995180 00:09:32.646 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:32.646 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3995180 00:09:32.646 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3995180 ']' 00:09:32.646 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.646 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.646 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:32.646 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.646 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.646 [2024-07-21 08:05:42.074848] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:32.646 [2024-07-21 08:05:42.074942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995180 ] 00:09:32.646 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.646 [2024-07-21 08:05:42.131532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.646 [2024-07-21 08:05:42.219960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.902 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3995180 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3995180 00:09:32.903 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:33.467 08:05:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3995180 00:09:33.467 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3995180 ']' 00:09:33.467 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3995180 00:09:33.467 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:09:33.467 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.467 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3995180 00:09:33.467 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:33.467 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:33.467 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3995180' 00:09:33.467 killing process with pid 3995180 00:09:33.467 08:05:42 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@967 -- # kill 3995180 00:09:33.467 08:05:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3995180 00:09:33.723 00:09:33.723 real 0m1.217s 00:09:33.723 user 0m1.145s 00:09:33.723 sys 0m0.532s 00:09:33.723 08:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.723 08:05:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.723 ************************************ 00:09:33.723 END TEST default_locks_via_rpc 00:09:33.723 ************************************ 00:09:33.723 08:05:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:33.723 08:05:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:33.723 08:05:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:33.723 08:05:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.723 08:05:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:33.723 ************************************ 00:09:33.723 START TEST non_locking_app_on_locked_coremask 00:09:33.723 ************************************ 00:09:33.723 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:09:33.723 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3995342 00:09:33.723 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:33.723 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3995342 /var/tmp/spdk.sock 00:09:33.723 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3995342 ']' 
00:09:33.723 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.723 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.723 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.723 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.723 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:33.723 [2024-07-21 08:05:43.340807] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:33.723 [2024-07-21 08:05:43.340904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995342 ] 00:09:33.981 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.981 [2024-07-21 08:05:43.399019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.981 [2024-07-21 08:05:43.487147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.238 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.238 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:34.238 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3995351 00:09:34.238 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:34.238 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3995351 /var/tmp/spdk2.sock 00:09:34.238 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3995351 ']' 00:09:34.238 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:34.238 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.238 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:34.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:34.238 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.238 08:05:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:34.238 [2024-07-21 08:05:43.794661] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:34.238 [2024-07-21 08:05:43.794740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995351 ] 00:09:34.238 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.494 [2024-07-21 08:05:43.886432] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:34.494 [2024-07-21 08:05:43.886465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.494 [2024-07-21 08:05:44.066252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.423 08:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.423 08:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:35.423 08:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3995342 00:09:35.423 08:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3995342 00:09:35.423 08:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:35.679 lslocks: write error 00:09:35.679 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3995342 00:09:35.679 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3995342 ']' 00:09:35.679 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3995342 00:09:35.679 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:35.679 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:35.679 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3995342 00:09:35.679 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:35.679 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:35.679 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3995342' 00:09:35.679 killing process with pid 3995342 00:09:35.679 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3995342 00:09:35.679 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3995342 00:09:36.609 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3995351 00:09:36.609 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3995351 ']' 00:09:36.609 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3995351 00:09:36.609 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:36.609 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.609 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3995351 00:09:36.609 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.609 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.609 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3995351' 00:09:36.609 killing process with pid 3995351 00:09:36.609 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3995351 00:09:36.609 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3995351 00:09:36.867 00:09:36.867 real 0m3.100s 00:09:36.867 user 0m3.220s 00:09:36.867 sys 0m1.030s 00:09:36.867 08:05:46 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.867 08:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:36.867 ************************************ 00:09:36.867 END TEST non_locking_app_on_locked_coremask 00:09:36.867 ************************************ 00:09:36.867 08:05:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:36.867 08:05:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:36.867 08:05:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:36.867 08:05:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.867 08:05:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:36.867 ************************************ 00:09:36.867 START TEST locking_app_on_unlocked_coremask 00:09:36.867 ************************************ 00:09:36.867 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:09:36.867 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3995780 00:09:36.867 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:36.867 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3995780 /var/tmp/spdk.sock 00:09:36.867 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3995780 ']' 00:09:36.867 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.867 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.867 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.867 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.867 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:36.867 [2024-07-21 08:05:46.476229] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:36.867 [2024-07-21 08:05:46.476303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995780 ] 00:09:37.125 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.125 [2024-07-21 08:05:46.536485] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:37.125 [2024-07-21 08:05:46.536518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.125 [2024-07-21 08:05:46.626731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.382 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.382 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:37.382 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3995785 00:09:37.382 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:37.382 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3995785 /var/tmp/spdk2.sock 00:09:37.382 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3995785 ']' 00:09:37.382 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:37.382 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.382 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:37.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:37.382 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.382 08:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:37.382 [2024-07-21 08:05:46.936534] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:09:37.382 [2024-07-21 08:05:46.936628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995785 ] 00:09:37.382 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.638 [2024-07-21 08:05:47.026337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.638 [2024-07-21 08:05:47.210108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.569 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.569 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:38.569 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3995785 00:09:38.569 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3995785 00:09:38.569 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:39.131 lslocks: write error 00:09:39.131 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3995780 00:09:39.131 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3995780 ']' 00:09:39.131 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3995780 00:09:39.131 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:39.131 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:39.131 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3995780 00:09:39.131 08:05:48 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:39.131 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:39.131 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3995780' 00:09:39.131 killing process with pid 3995780 00:09:39.131 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3995780 00:09:39.131 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3995780 00:09:40.071 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3995785 00:09:40.071 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3995785 ']' 00:09:40.071 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3995785 00:09:40.071 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:40.072 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.072 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3995785 00:09:40.072 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.072 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.072 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3995785' 00:09:40.072 killing process with pid 3995785 00:09:40.072 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 3995785 00:09:40.072 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3995785 00:09:40.345 00:09:40.345 real 0m3.469s 00:09:40.345 user 0m3.583s 00:09:40.345 sys 0m1.181s 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:40.345 ************************************ 00:09:40.345 END TEST locking_app_on_unlocked_coremask 00:09:40.345 ************************************ 00:09:40.345 08:05:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:40.345 08:05:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:40.345 08:05:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:40.345 08:05:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.345 08:05:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:40.345 ************************************ 00:09:40.345 START TEST locking_app_on_locked_coremask 00:09:40.345 ************************************ 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3996217 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3996217 /var/tmp/spdk.sock 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 
3996217 ']' 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.345 08:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:40.602 [2024-07-21 08:05:50.001663] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:40.602 [2024-07-21 08:05:50.001751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996217 ] 00:09:40.602 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.602 [2024-07-21 08:05:50.078784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.602 [2024-07-21 08:05:50.168400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3996220 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3996220 /var/tmp/spdk2.sock 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3996220 /var/tmp/spdk2.sock 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3996220 /var/tmp/spdk2.sock 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3996220 ']' 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:40.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.860 08:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:40.860 [2024-07-21 08:05:50.479647] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:40.860 [2024-07-21 08:05:50.479748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996220 ] 00:09:41.118 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.118 [2024-07-21 08:05:50.576159] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3996217 has claimed it. 00:09:41.118 [2024-07-21 08:05:50.576222] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:41.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3996220) - No such process 00:09:41.682 ERROR: process (pid: 3996220) is no longer running 00:09:41.682 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.682 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:09:41.682 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:09:41.682 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:41.682 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:41.682 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:41.682 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 3996217 00:09:41.682 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3996217 00:09:41.682 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:42.245 lslocks: write error 00:09:42.245 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3996217 00:09:42.245 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3996217 ']' 00:09:42.245 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3996217 00:09:42.245 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:42.245 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:42.245 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3996217 00:09:42.245 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:42.245 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:42.245 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3996217' 00:09:42.245 killing process with pid 3996217 00:09:42.245 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3996217 00:09:42.245 08:05:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3996217 00:09:42.501 00:09:42.501 real 0m2.063s 00:09:42.501 user 0m2.204s 00:09:42.501 sys 0m0.672s 00:09:42.501 08:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:42.501 
08:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:42.501 ************************************ 00:09:42.501 END TEST locking_app_on_locked_coremask 00:09:42.501 ************************************ 00:09:42.501 08:05:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:42.501 08:05:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:42.501 08:05:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:42.501 08:05:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.501 08:05:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:42.501 ************************************ 00:09:42.501 START TEST locking_overlapped_coremask 00:09:42.501 ************************************ 00:09:42.501 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:09:42.501 08:05:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3996511 00:09:42.501 08:05:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:42.501 08:05:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3996511 /var/tmp/spdk.sock 00:09:42.501 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3996511 ']' 00:09:42.501 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.501 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:42.501 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.501 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:42.501 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:42.501 [2024-07-21 08:05:52.105097] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:42.501 [2024-07-21 08:05:52.105182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996511 ] 00:09:42.758 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.758 [2024-07-21 08:05:52.164276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:42.758 [2024-07-21 08:05:52.254187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.758 [2024-07-21 08:05:52.254241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.758 [2024-07-21 08:05:52.254244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3996526 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3996526 /var/tmp/spdk2.sock 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg 
waitforlisten 3996526 /var/tmp/spdk2.sock 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3996526 /var/tmp/spdk2.sock 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3996526 ']' 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:43.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.015 08:05:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:43.015 [2024-07-21 08:05:52.555760] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:09:43.015 [2024-07-21 08:05:52.555858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996526 ] 00:09:43.015 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.015 [2024-07-21 08:05:52.643791] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3996511 has claimed it. 00:09:43.015 [2024-07-21 08:05:52.643849] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:43.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3996526) - No such process 00:09:43.945 ERROR: process (pid: 3996526) is no longer running 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:43.945 08:05:53 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3996511 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3996511 ']' 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3996511 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3996511 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:43.945 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:43.946 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3996511' 00:09:43.946 killing process with pid 3996511 00:09:43.946 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 3996511 00:09:43.946 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3996511 00:09:44.203 00:09:44.203 real 0m1.625s 00:09:44.203 user 0m4.397s 00:09:44.203 sys 0m0.455s 00:09:44.203 08:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.203 08:05:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:44.203 ************************************ 00:09:44.203 END TEST locking_overlapped_coremask 00:09:44.203 ************************************ 00:09:44.203 08:05:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:44.203 08:05:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:44.203 08:05:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:44.203 08:05:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.203 08:05:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:44.203 ************************************ 00:09:44.203 START TEST locking_overlapped_coremask_via_rpc 00:09:44.203 ************************************ 00:09:44.203 08:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:09:44.203 08:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3996688 00:09:44.203 08:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:44.203 08:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3996688 /var/tmp/spdk.sock 00:09:44.203 08:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3996688 ']' 00:09:44.203 08:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.203 08:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.203 08:05:53 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.203 08:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.203 08:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.203 [2024-07-21 08:05:53.781801] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:44.203 [2024-07-21 08:05:53.781895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996688 ] 00:09:44.203 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.460 [2024-07-21 08:05:53.843482] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:44.460 [2024-07-21 08:05:53.843520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.460 [2024-07-21 08:05:53.934029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.460 [2024-07-21 08:05:53.934082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.460 [2024-07-21 08:05:53.934086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.717 08:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:44.717 08:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:44.717 08:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3996758 00:09:44.717 08:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:44.717 08:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3996758 /var/tmp/spdk2.sock 00:09:44.717 08:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3996758 ']' 00:09:44.717 08:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:44.717 08:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.717 08:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:44.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:44.717 08:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.717 08:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.717 [2024-07-21 08:05:54.230286] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:44.717 [2024-07-21 08:05:54.230383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996758 ] 00:09:44.717 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.717 [2024-07-21 08:05:54.320826] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:44.717 [2024-07-21 08:05:54.320866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.974 [2024-07-21 08:05:54.497355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.974 [2024-07-21 08:05:54.497413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:44.974 [2024-07-21 08:05:54.497415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.904 [2024-07-21 08:05:55.182724] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3996688 has claimed it. 
00:09:45.904 request: 00:09:45.904 { 00:09:45.904 "method": "framework_enable_cpumask_locks", 00:09:45.904 "req_id": 1 00:09:45.904 } 00:09:45.904 Got JSON-RPC error response 00:09:45.904 response: 00:09:45.904 { 00:09:45.904 "code": -32603, 00:09:45.904 "message": "Failed to claim CPU core: 2" 00:09:45.904 } 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3996688 /var/tmp/spdk.sock 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3996688 ']' 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.904 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:45.905 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.905 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.905 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.905 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:45.905 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3996758 /var/tmp/spdk2.sock 00:09:45.905 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3996758 ']' 00:09:45.905 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:45.905 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.905 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:45.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:45.905 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.905 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.162 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.162 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:46.162 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:46.162 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:46.162 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:46.162 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:46.162 00:09:46.162 real 0m1.971s 00:09:46.162 user 0m1.015s 00:09:46.162 sys 0m0.188s 00:09:46.162 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.162 08:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.162 ************************************ 00:09:46.162 END TEST locking_overlapped_coremask_via_rpc 00:09:46.162 ************************************ 00:09:46.162 08:05:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:46.162 08:05:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:46.162 08:05:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
3996688 ]] 00:09:46.162 08:05:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3996688 00:09:46.162 08:05:55 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3996688 ']' 00:09:46.162 08:05:55 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3996688 00:09:46.162 08:05:55 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:09:46.162 08:05:55 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.162 08:05:55 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3996688 00:09:46.162 08:05:55 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:46.162 08:05:55 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:46.162 08:05:55 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3996688' 00:09:46.162 killing process with pid 3996688 00:09:46.162 08:05:55 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3996688 00:09:46.162 08:05:55 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3996688 00:09:46.726 08:05:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3996758 ]] 00:09:46.726 08:05:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3996758 00:09:46.726 08:05:56 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3996758 ']' 00:09:46.726 08:05:56 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3996758 00:09:46.726 08:05:56 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:09:46.726 08:05:56 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.726 08:05:56 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3996758 00:09:46.726 08:05:56 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:09:46.726 08:05:56 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:09:46.726 08:05:56 
event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3996758' 00:09:46.726 killing process with pid 3996758 00:09:46.726 08:05:56 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3996758 00:09:46.726 08:05:56 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3996758 00:09:46.983 08:05:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:46.983 08:05:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:46.983 08:05:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3996688 ]] 00:09:46.983 08:05:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3996688 00:09:46.983 08:05:56 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3996688 ']' 00:09:46.983 08:05:56 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3996688 00:09:46.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3996688) - No such process 00:09:46.983 08:05:56 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3996688 is not found' 00:09:46.983 Process with pid 3996688 is not found 00:09:46.983 08:05:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3996758 ]] 00:09:46.983 08:05:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3996758 00:09:46.983 08:05:56 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3996758 ']' 00:09:46.983 08:05:56 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3996758 00:09:46.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3996758) - No such process 00:09:46.983 08:05:56 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3996758 is not found' 00:09:46.983 Process with pid 3996758 is not found 00:09:46.983 08:05:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:46.983 00:09:46.983 real 0m15.834s 00:09:46.983 user 0m27.486s 00:09:46.983 sys 0m5.448s 00:09:46.983 08:05:56 
event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.983 08:05:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:46.983 ************************************ 00:09:46.983 END TEST cpu_locks 00:09:46.983 ************************************ 00:09:46.983 08:05:56 event -- common/autotest_common.sh@1142 -- # return 0 00:09:46.983 00:09:46.983 real 0m40.638s 00:09:46.983 user 1m16.750s 00:09:46.983 sys 0m9.433s 00:09:46.983 08:05:56 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.240 08:05:56 event -- common/autotest_common.sh@10 -- # set +x 00:09:47.241 ************************************ 00:09:47.241 END TEST event 00:09:47.241 ************************************ 00:09:47.241 08:05:56 -- common/autotest_common.sh@1142 -- # return 0 00:09:47.241 08:05:56 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:47.241 08:05:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:47.241 08:05:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.241 08:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:47.241 ************************************ 00:09:47.241 START TEST thread 00:09:47.241 ************************************ 00:09:47.241 08:05:56 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:47.241 * Looking for test storage... 
00:09:47.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:47.241 08:05:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:47.241 08:05:56 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:47.241 08:05:56 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.241 08:05:56 thread -- common/autotest_common.sh@10 -- # set +x 00:09:47.241 ************************************ 00:09:47.241 START TEST thread_poller_perf 00:09:47.241 ************************************ 00:09:47.241 08:05:56 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:47.241 [2024-07-21 08:05:56.750835] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:47.241 [2024-07-21 08:05:56.750899] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997187 ] 00:09:47.241 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.241 [2024-07-21 08:05:56.811669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.498 [2024-07-21 08:05:56.902439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.498 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:09:48.428 ====================================== 00:09:48.428 busy:2709745755 (cyc) 00:09:48.428 total_run_count: 295000 00:09:48.428 tsc_hz: 2700000000 (cyc) 00:09:48.428 ====================================== 00:09:48.428 poller_cost: 9185 (cyc), 3401 (nsec) 00:09:48.428 00:09:48.428 real 0m1.255s 00:09:48.428 user 0m1.169s 00:09:48.428 sys 0m0.080s 00:09:48.428 08:05:57 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.428 08:05:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:48.428 ************************************ 00:09:48.428 END TEST thread_poller_perf 00:09:48.428 ************************************ 00:09:48.428 08:05:58 thread -- common/autotest_common.sh@1142 -- # return 0 00:09:48.428 08:05:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:48.428 08:05:58 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:48.428 08:05:58 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.428 08:05:58 thread -- common/autotest_common.sh@10 -- # set +x 00:09:48.428 ************************************ 00:09:48.428 START TEST thread_poller_perf 00:09:48.428 ************************************ 00:09:48.428 08:05:58 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:48.428 [2024-07-21 08:05:58.050564] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:09:48.428 [2024-07-21 08:05:58.050635] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997342 ]
00:09:48.684 EAL: No free 2048 kB hugepages reported on node 1
00:09:48.684 [2024-07-21 08:05:58.112317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:48.684 [2024-07-21 08:05:58.205250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:48.685 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:09:50.051 ======================================
00:09:50.051 busy:2702921785 (cyc)
00:09:50.051 total_run_count: 3936000
00:09:50.051 tsc_hz: 2700000000 (cyc)
00:09:50.051 ======================================
00:09:50.051 poller_cost: 686 (cyc), 254 (nsec)
00:09:50.051
00:09:50.051 real 0m1.249s
00:09:50.051 user 0m1.165s
00:09:50.051 sys 0m0.078s
00:09:50.051 08:05:59 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:50.051 08:05:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:09:50.051 ************************************
00:09:50.051 END TEST thread_poller_perf
00:09:50.051 ************************************
00:09:50.051 08:05:59 thread -- common/autotest_common.sh@1142 -- # return 0
00:09:50.051 08:05:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:09:50.051
00:09:50.051 real 0m2.651s
00:09:50.051 user 0m2.400s
00:09:50.051 sys 0m0.250s
00:09:50.051 08:05:59 thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:50.051 08:05:59 thread -- common/autotest_common.sh@10 -- # set +x
00:09:50.051 ************************************
00:09:50.051 END TEST thread
00:09:50.051 ************************************
00:09:50.051 08:05:59 -- common/autotest_common.sh@1142 -- # return 0
00:09:50.051 08:05:59 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:09:50.051 08:05:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:50.051 08:05:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:50.051 08:05:59 -- common/autotest_common.sh@10 -- # set +x
00:09:50.051 ************************************
00:09:50.051 START TEST accel
00:09:50.051 ************************************
00:09:50.051 08:05:59 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:09:50.051 * Looking for test storage...
00:09:50.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:09:50.051 08:05:59 accel -- accel/accel.sh@81 -- # declare -A expected_opcs
00:09:50.051 08:05:59 accel -- accel/accel.sh@82 -- # get_expected_opcs
00:09:50.051 08:05:59 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:50.051 08:05:59 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3997533
00:09:50.051 08:05:59 accel -- accel/accel.sh@63 -- # waitforlisten 3997533
00:09:50.051 08:05:59 accel -- common/autotest_common.sh@829 -- # '[' -z 3997533 ']'
00:09:50.051 08:05:59 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:09:50.051 08:05:59 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:50.051 08:05:59 accel -- accel/accel.sh@61 -- # build_accel_config
00:09:50.051 08:05:59 accel -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:50.051 08:05:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:50.051 08:05:59 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:50.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:50.051 08:05:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:50.051 08:05:59 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.051 08:05:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:50.051 08:05:59 accel -- common/autotest_common.sh@10 -- # set +x 00:09:50.051 08:05:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:50.051 08:05:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:50.051 08:05:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:50.051 08:05:59 accel -- accel/accel.sh@41 -- # jq -r . 00:09:50.051 [2024-07-21 08:05:59.461676] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:50.051 [2024-07-21 08:05:59.461779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997533 ] 00:09:50.051 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.051 [2024-07-21 08:05:59.521088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.051 [2024-07-21 08:05:59.611249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.309 08:05:59 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.309 08:05:59 accel -- common/autotest_common.sh@862 -- # return 0 00:09:50.309 08:05:59 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:09:50.309 08:05:59 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:09:50.309 08:05:59 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:09:50.309 08:05:59 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:09:50.309 08:05:59 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:50.309 08:05:59 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:09:50.309 08:05:59 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.309 08:05:59 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:09:50.309 08:05:59 accel -- common/autotest_common.sh@10 -- # set +x 00:09:50.309 08:05:59 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # 
IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.309 08:05:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.309 08:05:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.309 08:05:59 accel -- accel/accel.sh@75 -- # killprocess 3997533 00:09:50.309 08:05:59 accel -- common/autotest_common.sh@948 -- # '[' -z 3997533 ']' 00:09:50.309 08:05:59 accel -- common/autotest_common.sh@952 -- # kill -0 3997533 00:09:50.309 08:05:59 accel -- common/autotest_common.sh@953 -- # uname 00:09:50.309 08:05:59 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:50.309 08:05:59 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3997533 00:09:50.566 08:05:59 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:50.566 08:05:59 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:50.566 08:05:59 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3997533' 00:09:50.566 killing process with pid 3997533 00:09:50.566 08:05:59 accel -- common/autotest_common.sh@967 -- # kill 3997533 00:09:50.566 
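The get_expected_opcs loop that just finished above feeds the output of the `accel_get_opc_assignments` RPC through `jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'` and then reads each pair into the `expected_opcs` associative array with `IFS== read -r opc module`. A small Python sketch of that same transform (the sample opcode map is illustrative, not taken from this run):

```python
def opc_assignments_to_opcs(assignments: dict[str, str]) -> dict[str, str]:
    """Mimic accel.sh: jq turns the JSON map into key=value lines,
    then the read loop splits each line on '=' into opc and module."""
    lines = [f"{k}={v}" for k, v in assignments.items()]  # jq 'to_entries | map(...) | .[]'
    expected_opcs = {}
    for line in lines:
        opc, _, module = line.partition("=")  # IFS== read -r opc module
        expected_opcs[opc] = module
    return expected_opcs

# Illustrative map; the real one comes from the spdk_tgt RPC, and in this
# log every opcode resolved to the software module.
print(opc_assignments_to_opcs({"copy": "software", "crc32c": "software"}))
```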
08:05:59 accel -- common/autotest_common.sh@972 -- # wait 3997533 00:09:50.823 08:06:00 accel -- accel/accel.sh@76 -- # trap - ERR 00:09:50.823 08:06:00 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:09:50.824 08:06:00 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:50.824 08:06:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.824 08:06:00 accel -- common/autotest_common.sh@10 -- # set +x 00:09:50.824 08:06:00 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:09:50.824 08:06:00 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:50.824 08:06:00 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:09:50.824 08:06:00 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:50.824 08:06:00 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:50.824 08:06:00 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:50.824 08:06:00 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:50.824 08:06:00 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:50.824 08:06:00 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:09:50.824 08:06:00 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:09:50.824 08:06:00 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.824 08:06:00 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:09:50.824 08:06:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:50.824 08:06:00 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:50.824 08:06:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:50.824 08:06:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.824 08:06:00 accel -- common/autotest_common.sh@10 -- # set +x 00:09:50.824 ************************************ 00:09:50.824 START TEST accel_missing_filename 00:09:50.824 ************************************ 00:09:50.824 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:09:50.824 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:09:50.824 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:50.824 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:50.824 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:50.824 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:50.824 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:50.824 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:09:50.824 08:06:00 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:50.824 08:06:00 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:09:50.824 08:06:00 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:50.824 08:06:00 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:50.824 08:06:00 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:50.824 08:06:00 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:50.824 08:06:00 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:50.824 08:06:00 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:09:50.824 08:06:00 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:09:50.824 [2024-07-21 08:06:00.428117] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:50.824 [2024-07-21 08:06:00.428183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997733 ] 00:09:51.081 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.081 [2024-07-21 08:06:00.489400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.081 [2024-07-21 08:06:00.581098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.081 [2024-07-21 08:06:00.639505] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:51.339 [2024-07-21 08:06:00.720983] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:09:51.339 A filename is required. 
00:09:51.339 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:09:51.339 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:51.339 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:09:51.339 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:09:51.339 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:09:51.339 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:51.339 00:09:51.339 real 0m0.390s 00:09:51.339 user 0m0.289s 00:09:51.339 sys 0m0.135s 00:09:51.339 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.339 08:06:00 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:09:51.339 ************************************ 00:09:51.339 END TEST accel_missing_filename 00:09:51.339 ************************************ 00:09:51.339 08:06:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:51.339 08:06:00 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:51.339 08:06:00 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:51.339 08:06:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.339 08:06:00 accel -- common/autotest_common.sh@10 -- # set +x 00:09:51.339 ************************************ 00:09:51.339 START TEST accel_compress_verify 00:09:51.339 ************************************ 00:09:51.339 08:06:00 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:51.339 08:06:00 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:09:51.339 08:06:00 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:51.339 08:06:00 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:51.339 08:06:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:51.339 08:06:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:51.339 08:06:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:51.339 08:06:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:51.339 08:06:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:51.339 08:06:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:51.339 08:06:00 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:51.339 08:06:00 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:51.339 08:06:00 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:51.339 08:06:00 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:51.339 08:06:00 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:51.339 08:06:00 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:51.339 08:06:00 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:09:51.339 [2024-07-21 08:06:00.864007] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:09:51.339 [2024-07-21 08:06:00.864062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997786 ] 00:09:51.339 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.339 [2024-07-21 08:06:00.923650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.596 [2024-07-21 08:06:01.018066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.596 [2024-07-21 08:06:01.077163] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:51.596 [2024-07-21 08:06:01.150842] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:09:51.596 00:09:51.596 Compression does not support the verify option, aborting. 00:09:51.596 08:06:01 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:09:51.596 08:06:01 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:51.596 08:06:01 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:09:51.596 08:06:01 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:09:51.596 08:06:01 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:09:51.596 08:06:01 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:51.596 00:09:51.596 real 0m0.379s 00:09:51.596 user 0m0.276s 00:09:51.596 sys 0m0.135s 00:09:51.596 08:06:01 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.596 08:06:01 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:09:51.596 ************************************ 00:09:51.596 END TEST accel_compress_verify 00:09:51.596 ************************************ 00:09:51.854 08:06:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:51.854 08:06:01 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:51.854 08:06:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:51.854 08:06:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.854 08:06:01 accel -- common/autotest_common.sh@10 -- # set +x 00:09:51.854 ************************************ 00:09:51.854 START TEST accel_wrong_workload 00:09:51.854 ************************************ 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:09:51.854 08:06:01 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:51.854 08:06:01 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:09:51.854 08:06:01 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:51.854 08:06:01 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:51.854 08:06:01 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:51.854 08:06:01 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:09:51.854 08:06:01 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:51.854 08:06:01 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=,
00:09:51.854 08:06:01 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r .
00:09:51.854 Unsupported workload type: foobar
00:09:51.854 [2024-07-21 08:06:01.290383] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:09:51.854 accel_perf options:
00:09:51.854 [-h help message]
00:09:51.854 [-q queue depth per core]
00:09:51.854 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:09:51.854 [-T number of threads per core
00:09:51.854 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:09:51.854 [-t time in seconds]
00:09:51.854 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:09:51.854 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:09:51.854 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:09:51.854 [-l for compress/decompress workloads, name of uncompressed input file
00:09:51.854 [-S for crc32c workload, use this seed value (default 0)
00:09:51.854 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:09:51.854 [-f for fill workload, use this BYTE value (default 255)
00:09:51.854 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:09:51.854 [-y verify result if this switch is on]
00:09:51.854 [-a tasks to allocate per core (default: same value as -q)]
00:09:51.854 Can be used to spread operations across a wider range of memory.
00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:51.854 00:09:51.854 real 0m0.023s 00:09:51.854 user 0m0.015s 00:09:51.854 sys 0m0.008s 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.854 08:06:01 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:09:51.854 ************************************ 00:09:51.854 END TEST accel_wrong_workload 00:09:51.854 ************************************ 00:09:51.854 Error: writing output failed: Broken pipe 00:09:51.854 08:06:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:51.855 08:06:01 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:51.855 08:06:01 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:51.855 08:06:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.855 08:06:01 accel -- common/autotest_common.sh@10 -- # set +x 00:09:51.855 ************************************ 00:09:51.855 START TEST accel_negative_buffers 00:09:51.855 ************************************ 00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:51.855 08:06:01 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf
00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1
00:09:51.855 08:06:01 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:09:51.855 08:06:01 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config
00:09:51.855 08:06:01 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:51.855 08:06:01 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:51.855 08:06:01 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:51.855 08:06:01 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:51.855 08:06:01 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:51.855 08:06:01 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=,
00:09:51.855 08:06:01 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r .
00:09:51.855 -x option must be non-negative.
00:09:51.855 [2024-07-21 08:06:01.355489] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:09:51.855 accel_perf options:
00:09:51.855 [-h help message]
00:09:51.855 [-q queue depth per core]
00:09:51.855 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:09:51.855 [-T number of threads per core
00:09:51.855 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:09:51.855 [-t time in seconds]
00:09:51.855 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:09:51.855 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:09:51.855 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:09:51.855 [-l for compress/decompress workloads, name of uncompressed input file
00:09:51.855 [-S for crc32c workload, use this seed value (default 0)
00:09:51.855 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:09:51.855 [-f for fill workload, use this BYTE value (default 255)
00:09:51.855 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:09:51.855 [-y verify result if this switch is on]
00:09:51.855 [-a tasks to allocate per core (default: same value as -q)]
00:09:51.855 Can be used to spread operations across a wider range of memory.
00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:51.855 00:09:51.855 real 0m0.021s 00:09:51.855 user 0m0.013s 00:09:51.855 sys 0m0.008s 00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.855 08:06:01 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:09:51.855 ************************************ 00:09:51.855 END TEST accel_negative_buffers 00:09:51.855 ************************************ 00:09:51.855 Error: writing output failed: Broken pipe 00:09:51.855 08:06:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:51.855 08:06:01 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:51.855 08:06:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:51.855 08:06:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.855 08:06:01 accel -- common/autotest_common.sh@10 -- # set +x 00:09:51.855 ************************************ 00:09:51.855 START TEST accel_crc32c 00:09:51.855 ************************************ 00:09:51.855 08:06:01 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:51.855 08:06:01 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:51.855 [2024-07-21 08:06:01.418457] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:51.855 [2024-07-21 08:06:01.418522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998000 ] 00:09:51.855 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.855 [2024-07-21 08:06:01.479020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.114 [2024-07-21 08:06:01.572327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 
08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:52.114 08:06:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:53.489 08:06:02 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:53.489 08:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:53.490 08:06:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:53.490 08:06:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:53.490 00:09:53.490 real 0m1.402s 00:09:53.490 user 0m1.264s 00:09:53.490 sys 0m0.141s 00:09:53.490 08:06:02 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:53.490 08:06:02 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:53.490 ************************************ 00:09:53.490 END TEST accel_crc32c 00:09:53.490 ************************************ 00:09:53.490 08:06:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:53.490 08:06:02 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:53.490 08:06:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:53.490 08:06:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.490 08:06:02 accel -- common/autotest_common.sh@10 -- # set +x 
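The accel_crc32c test above drives SPDK's software CRC-32C path via `accel_perf -t 1 -w crc32c -S 32 -y`. As an illustrative cross-check only (this is not SPDK code, and SPDK's implementation is table- or instruction-accelerated rather than bitwise), CRC-32C with the reflected Castagnoli polynomial 0x82F63B78 can be sketched in pure Python:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right one bit; XOR in the polynomial when the low bit is set
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the "123456789" test string (per RFC 3720)
assert crc32c(b"123456789") == 0xE3069283
```

A bitwise loop like this is orders of magnitude slower than what accel_perf measures; it only pins down which checksum variant the crc32c workload name refers to.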
00:09:53.490 ************************************ 00:09:53.490 START TEST accel_crc32c_C2 00:09:53.490 ************************************ 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:53.490 08:06:02 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:53.490 [2024-07-21 08:06:02.857198] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:09:53.490 [2024-07-21 08:06:02.857251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998185 ] 00:09:53.490 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.490 [2024-07-21 08:06:02.913405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.490 [2024-07-21 08:06:02.999999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:53.490 08:06:03 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:53.490 08:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:54.855 
08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:54.855 00:09:54.855 real 0m1.380s 00:09:54.855 user 0m1.254s 00:09:54.855 sys 0m0.128s 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.855 08:06:04 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:54.855 ************************************ 00:09:54.855 END TEST accel_crc32c_C2 00:09:54.855 ************************************ 00:09:54.855 08:06:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:54.855 08:06:04 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:54.855 08:06:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:54.855 08:06:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.855 08:06:04 accel -- common/autotest_common.sh@10 -- # set +x 00:09:54.855 ************************************ 00:09:54.855 START TEST accel_copy 00:09:54.855 ************************************ 00:09:54.855 08:06:04 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:09:54.855 [2024-07-21 08:06:04.273413] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:54.855 [2024-07-21 08:06:04.273467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998338 ] 00:09:54.855 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.855 [2024-07-21 08:06:04.331676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.855 [2024-07-21 08:06:04.416298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.855 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.856 08:06:04 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.856 08:06:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:56.233 08:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:56.233 08:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:56.233 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:56.233 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:56.233 08:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:56.233 08:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:56.233 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:56.233 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:09:56.234 08:06:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:56.234 00:09:56.234 real 0m1.370s 00:09:56.234 user 0m1.238s 00:09:56.234 sys 0m0.134s 00:09:56.234 08:06:05 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.234 08:06:05 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:09:56.234 ************************************ 00:09:56.234 END TEST accel_copy 00:09:56.234 ************************************ 00:09:56.234 08:06:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:56.234 08:06:05 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:56.234 08:06:05 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:56.234 08:06:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.234 08:06:05 accel -- common/autotest_common.sh@10 -- # set +x 00:09:56.234 ************************************ 00:09:56.234 START TEST accel_fill 00:09:56.234 ************************************ 00:09:56.234 08:06:05 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.234 08:06:05 
accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:09:56.234 08:06:05 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:09:56.234 [2024-07-21 08:06:05.691713] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:09:56.234 [2024-07-21 08:06:05.691776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998614 ] 00:09:56.234 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.234 [2024-07-21 08:06:05.749778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.234 [2024-07-21 08:06:05.838319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:56.492 08:06:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@20 
-- # val= 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:09:57.865 08:06:07 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:09:57.865 00:09:57.865 real 0m1.389s 00:09:57.865 user 0m1.252s 00:09:57.865 sys 0m0.139s 00:09:57.865 08:06:07 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.865 08:06:07 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:09:57.865 ************************************ 00:09:57.865 END TEST accel_fill 00:09:57.865 ************************************ 00:09:57.865 08:06:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:57.865 08:06:07 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:57.865 08:06:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:57.865 08:06:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.865 08:06:07 accel -- common/autotest_common.sh@10 -- # set +x 00:09:57.865 ************************************ 00:09:57.865 START TEST accel_copy_crc32c 00:09:57.865 ************************************ 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@32 -- 
# [[ 0 -gt 0 ]] 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:57.865 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:57.865 [2024-07-21 08:06:07.128272] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:57.865 [2024-07-21 08:06:07.128336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998771 ] 00:09:57.866 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.866 [2024-07-21 08:06:07.190187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.866 [2024-07-21 08:06:07.285399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 
bytes' 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.866 08:06:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:59.239 00:09:59.239 real 0m1.402s 00:09:59.239 user 0m1.254s 00:09:59.239 sys 0m0.150s 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.239 08:06:08 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:59.239 ************************************ 00:09:59.239 END TEST accel_copy_crc32c 
00:09:59.239 ************************************ 00:09:59.239 08:06:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:59.239 08:06:08 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:59.239 08:06:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:59.239 08:06:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.239 08:06:08 accel -- common/autotest_common.sh@10 -- # set +x 00:09:59.239 ************************************ 00:09:59.239 START TEST accel_copy_crc32c_C2 00:09:59.239 ************************************ 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:59.239 08:06:08 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:59.239 [2024-07-21 08:06:08.572589] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:09:59.239 [2024-07-21 08:06:08.572678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3999341 ] 00:09:59.239 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.239 [2024-07-21 08:06:08.633938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.239 [2024-07-21 08:06:08.728195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:59.239 
08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.239 08:06:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.607 08:06:09 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:00.607 00:10:00.607 real 0m1.405s 00:10:00.607 user 0m1.270s 00:10:00.607 sys 0m0.138s 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.607 08:06:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:00.607 ************************************ 00:10:00.607 
END TEST accel_copy_crc32c_C2 00:10:00.607 ************************************ 00:10:00.607 08:06:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:00.607 08:06:09 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:00.607 08:06:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:00.607 08:06:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.607 08:06:09 accel -- common/autotest_common.sh@10 -- # set +x 00:10:00.607 ************************************ 00:10:00.607 START TEST accel_dualcast 00:10:00.607 ************************************ 00:10:00.607 08:06:10 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:00.607 08:06:10 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:10:00.607 08:06:10 accel.accel_dualcast -- 
accel/accel.sh@41 -- # jq -r . 00:10:00.607 [2024-07-21 08:06:10.028365] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:00.607 [2024-07-21 08:06:10.028448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3999634 ] 00:10:00.607 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.607 [2024-07-21 08:06:10.095501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.607 [2024-07-21 08:06:10.189183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 
00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:00.864 08:06:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:00.864 08:06:10 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:01.794 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:01.795 08:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:01.795 08:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:01.795 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:01.795 08:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:01.795 08:06:11 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:10:01.795 08:06:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:10:01.795 08:06:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:01.795 00:10:01.795 real 0m1.408s 00:10:01.795 user 0m1.256s 00:10:01.795 sys 0m0.154s 00:10:01.795 08:06:11 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.795 08:06:11 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:10:01.795 ************************************ 00:10:01.795 END TEST accel_dualcast 00:10:01.795 ************************************ 00:10:02.052 08:06:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:02.052 08:06:11 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:02.052 08:06:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:02.052 08:06:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.052 08:06:11 accel -- common/autotest_common.sh@10 -- # set +x 00:10:02.052 ************************************ 00:10:02.052 START TEST accel_compare 00:10:02.052 ************************************ 00:10:02.052 08:06:11 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:10:02.052 08:06:11 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:10:02.052 [2024-07-21 08:06:11.480783] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:02.052 [2024-07-21 08:06:11.480845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3999859 ] 00:10:02.052 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.052 [2024-07-21 08:06:11.542066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.052 [2024-07-21 08:06:11.635283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:10:02.310 
08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:10:02.310 
08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.310 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:02.310 
08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.311 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.311 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:02.311 08:06:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:02.311 08:06:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:02.311 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:02.311 08:06:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:03.244 08:06:12 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:10:03.244 08:06:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:03.244 00:10:03.244 real 0m1.405s 00:10:03.244 user 0m1.262s 00:10:03.244 sys 0m0.145s 00:10:03.244 08:06:12 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:03.244 08:06:12 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:10:03.244 ************************************ 00:10:03.244 END TEST accel_compare 00:10:03.244 ************************************ 00:10:03.501 08:06:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:03.501 08:06:12 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:03.501 08:06:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:03.501 08:06:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.501 08:06:12 accel -- common/autotest_common.sh@10 -- # set +x 00:10:03.501 ************************************ 00:10:03.501 START TEST accel_xor 00:10:03.501 ************************************ 00:10:03.501 08:06:12 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:10:03.501 08:06:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:03.501 08:06:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:03.502 08:06:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:03.502 [2024-07-21 08:06:12.931353] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:10:03.502 [2024-07-21 08:06:12.931406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000021 ] 00:10:03.502 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.502 [2024-07-21 08:06:12.991850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.502 [2024-07-21 08:06:13.084857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.759 
08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:03.759 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.760 08:06:13 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:03.760 08:06:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:04.691 08:06:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:04.691 00:10:04.691 real 0m1.404s 00:10:04.691 user 0m1.258s 00:10:04.691 sys 
0m0.149s 00:10:04.691 08:06:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.691 08:06:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:04.691 ************************************ 00:10:04.691 END TEST accel_xor 00:10:04.691 ************************************ 00:10:04.948 08:06:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:04.948 08:06:14 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:04.948 08:06:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:04.948 08:06:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.948 08:06:14 accel -- common/autotest_common.sh@10 -- # set +x 00:10:04.948 ************************************ 00:10:04.948 START TEST accel_xor 00:10:04.948 ************************************ 00:10:04.948 08:06:14 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.948 08:06:14 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:04.948 08:06:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:04.949 [2024-07-21 08:06:14.384481] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:04.949 [2024-07-21 08:06:14.384545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000175 ] 00:10:04.949 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.949 [2024-07-21 08:06:14.444979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.949 [2024-07-21 08:06:14.542434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:05.206 08:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:06.135 08:06:15 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:06.135 08:06:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:06.135 00:10:06.135 real 0m1.398s 00:10:06.135 user 0m1.255s 00:10:06.135 sys 0m0.145s 00:10:06.135 08:06:15 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.135 08:06:15 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:06.396 ************************************ 00:10:06.396 END TEST accel_xor 00:10:06.396 ************************************ 00:10:06.396 08:06:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:06.396 08:06:15 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:06.396 08:06:15 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:06.396 08:06:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.396 08:06:15 accel -- common/autotest_common.sh@10 -- # set +x 00:10:06.396 ************************************ 00:10:06.396 START TEST accel_dif_verify 00:10:06.396 ************************************ 00:10:06.396 08:06:15 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:06.396 08:06:15 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:10:06.396 [2024-07-21 08:06:15.828080] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:06.396 [2024-07-21 08:06:15.828146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000449 ] 00:10:06.396 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.396 [2024-07-21 08:06:15.891761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.396 [2024-07-21 08:06:15.988535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.668 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.668 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.668 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.668 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.668 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.668 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.668 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.668 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.668 08:06:16 accel.accel_dif_verify 
-- accel/accel.sh@20 -- # val=0x1 00:10:06.668 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.668 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify 
-- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:06.669 08:06:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:07.600 08:06:17 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:10:07.600 08:06:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:10:07.600 00:10:07.600 real 0m1.398s 00:10:07.600 user 0m1.253s 00:10:07.600 sys 0m0.149s 00:10:07.600 08:06:17 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.600 08:06:17 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:10:07.600 ************************************ 00:10:07.600 END TEST accel_dif_verify 00:10:07.600 ************************************ 00:10:07.600 08:06:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:07.600 08:06:17 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:07.600 08:06:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:07.600 08:06:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.600 08:06:17 accel -- common/autotest_common.sh@10 -- # set +x 00:10:07.858 ************************************ 00:10:07.858 START TEST accel_dif_generate 00:10:07.858 ************************************ 00:10:07.858 08:06:17 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:07.858 08:06:17 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:10:07.858 [2024-07-21 08:06:17.265061] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:07.858 [2024-07-21 08:06:17.265114] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000607 ] 00:10:07.858 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.858 [2024-07-21 08:06:17.324424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.858 [2024-07-21 08:06:17.417300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.858 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:07.859 08:06:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:10:09.229 08:06:18 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:09.229 00:10:09.229 real 0m1.399s 00:10:09.229 user 0m1.258s 00:10:09.229 sys 0m0.145s 00:10:09.229 08:06:18 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.229 08:06:18 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:10:09.229 ************************************ 00:10:09.229 END TEST accel_dif_generate 00:10:09.229 ************************************ 00:10:09.229 08:06:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:09.229 08:06:18 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:09.229 08:06:18 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:09.229 08:06:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.229 08:06:18 accel -- common/autotest_common.sh@10 -- # set +x 00:10:09.229 ************************************ 00:10:09.229 START TEST accel_dif_generate_copy 00:10:09.229 ************************************ 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:09.229 08:06:18 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:09.229 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:10:09.229 [2024-07-21 08:06:18.714527] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:09.229 [2024-07-21 08:06:18.714593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000761 ] 00:10:09.229 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.229 [2024-07-21 08:06:18.776990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.486 [2024-07-21 08:06:18.871219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:09.486 08:06:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:10.852 08:06:20 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:10.852 00:10:10.852 real 0m1.410s 00:10:10.852 user 0m1.263s 00:10:10.852 sys 0m0.150s 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.852 08:06:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:10:10.852 ************************************ 00:10:10.852 END TEST accel_dif_generate_copy 00:10:10.852 ************************************ 00:10:10.852 08:06:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:10.852 08:06:20 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:10:10.852 08:06:20 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:10.852 08:06:20 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:10.852 08:06:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.853 08:06:20 accel -- common/autotest_common.sh@10 -- # set +x 00:10:10.853 ************************************ 00:10:10.853 START TEST accel_comp 00:10:10.853 ************************************ 00:10:10.853 08:06:20 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:10:10.853 [2024-07-21 08:06:20.170553] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:10.853 [2024-07-21 08:06:20.170622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000942 ] 00:10:10.853 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.853 [2024-07-21 08:06:20.231354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.853 [2024-07-21 08:06:20.323494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:10.853 08:06:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:12.219 08:06:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:12.219 08:06:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.219 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:12.219 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:12.219 08:06:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:12.219 08:06:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:12.220 08:06:21 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:12.220 00:10:12.220 real 0m1.408s 00:10:12.220 user 0m1.269s 00:10:12.220 sys 0m0.143s 00:10:12.220 08:06:21 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.220 08:06:21 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:10:12.220 ************************************ 00:10:12.220 END TEST accel_comp 00:10:12.220 ************************************ 00:10:12.220 08:06:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:12.220 08:06:21 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:12.220 08:06:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:12.220 08:06:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.220 08:06:21 accel -- common/autotest_common.sh@10 -- # set +x 00:10:12.220 ************************************ 00:10:12.220 START TEST accel_decomp 00:10:12.220 ************************************ 00:10:12.220 08:06:21 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:10:12.220 [2024-07-21 08:06:21.623190] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:12.220 [2024-07-21 08:06:21.623244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001188 ] 00:10:12.220 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.220 [2024-07-21 08:06:21.682734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.220 [2024-07-21 08:06:21.773259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 
08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:12.220 08:06:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.590 08:06:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.591 08:06:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:13.591 08:06:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:13.591 08:06:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:13.591 00:10:13.591 real 0m1.385s 00:10:13.591 user 0m1.256s 00:10:13.591 sys 0m0.132s 00:10:13.591 08:06:22 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:13.591 08:06:22 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:13.591 ************************************ 00:10:13.591 END TEST accel_decomp 00:10:13.591 ************************************ 00:10:13.591 08:06:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:13.591 08:06:23 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:13.591 08:06:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:13.591 08:06:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.591 08:06:23 accel -- common/autotest_common.sh@10 -- # set +x 00:10:13.591 ************************************ 00:10:13.591 START TEST accel_decomp_full 00:10:13.591 ************************************ 00:10:13.591 08:06:23 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:13.591 
08:06:23 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:13.591 08:06:23 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:13.591 [2024-07-21 08:06:23.057430] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:10:13.591 [2024-07-21 08:06:23.057498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001353 ] 00:10:13.591 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.591 [2024-07-21 08:06:23.118674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.591 [2024-07-21 08:06:23.211770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.848 08:06:23 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.848 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:10:13.849 08:06:23 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.849 08:06:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:15.220 08:06:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:15.220 00:10:15.220 real 0m1.412s 00:10:15.220 user 0m1.274s 00:10:15.220 sys 0m0.142s 00:10:15.220 08:06:24 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:15.220 08:06:24 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:15.220 ************************************ 00:10:15.220 END TEST accel_decomp_full 00:10:15.220 ************************************ 00:10:15.220 08:06:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:15.220 08:06:24 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:15.220 08:06:24 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:15.220 08:06:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.220 08:06:24 accel 
-- common/autotest_common.sh@10 -- # set +x 00:10:15.220 ************************************ 00:10:15.220 START TEST accel_decomp_mcore 00:10:15.220 ************************************ 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:15.220 [2024-07-21 08:06:24.517220] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:10:15.220 [2024-07-21 08:06:24.517284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001509 ] 00:10:15.220 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.220 [2024-07-21 08:06:24.579291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.220 [2024-07-21 08:06:24.675843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.220 [2024-07-21 08:06:24.675897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.220 [2024-07-21 08:06:24.676016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.220 [2024-07-21 08:06:24.676019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:15.220 08:06:24 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:15.220 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:15.221 08:06:24 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.221 08:06:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:25 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 
08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:16.589 00:10:16.589 real 0m1.409s 00:10:16.589 user 0m4.706s 00:10:16.589 sys 0m0.151s 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.589 08:06:25 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:16.589 ************************************ 00:10:16.589 END TEST accel_decomp_mcore 00:10:16.589 ************************************ 00:10:16.589 08:06:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:16.589 08:06:25 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:16.589 08:06:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:16.589 08:06:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.589 08:06:25 accel -- common/autotest_common.sh@10 -- # set +x 00:10:16.589 ************************************ 00:10:16.589 START TEST accel_decomp_full_mcore 00:10:16.589 ************************************ 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:16.589 08:06:25 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:16.589 [2024-07-21 08:06:25.974805] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:10:16.589 [2024-07-21 08:06:25.974867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001783 ] 00:10:16.589 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.589 [2024-07-21 08:06:26.036491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.589 [2024-07-21 08:06:26.132676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.589 [2024-07-21 08:06:26.132731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.589 [2024-07-21 08:06:26.132847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.589 [2024-07-21 08:06:26.132850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:26 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.589 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:16.590 08:06:26 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:16.590 08:06:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.958 
08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:17.958 00:10:17.958 real 0m1.431s 00:10:17.958 user 0m4.773s 00:10:17.958 sys 0m0.156s 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.958 08:06:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:17.958 ************************************ 00:10:17.958 END TEST accel_decomp_full_mcore 00:10:17.958 ************************************ 00:10:17.958 08:06:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:17.958 08:06:27 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:17.958 08:06:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:17.958 08:06:27 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:10:17.958 08:06:27 accel -- common/autotest_common.sh@10 -- # set +x 00:10:17.958 ************************************ 00:10:17.958 START TEST accel_decomp_mthread 00:10:17.958 ************************************ 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:17.958 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:10:17.958 [2024-07-21 08:06:27.455913] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:17.958 [2024-07-21 08:06:27.455984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001944 ] 00:10:17.958 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.958 [2024-07-21 08:06:27.517717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.215 [2024-07-21 08:06:27.611414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:10:18.215 
08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:18.215 08:06:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.598 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:19.598 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.598 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.598 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.598 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:19.598 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.598 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.598 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.598 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:19.599 00:10:19.599 real 0m1.417s 00:10:19.599 user 0m1.268s 00:10:19.599 sys 0m0.152s 00:10:19.599 08:06:28 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.599 08:06:28 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:19.599 ************************************ 00:10:19.599 END TEST accel_decomp_mthread 00:10:19.599 ************************************ 00:10:19.599 08:06:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:19.599 08:06:28 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:19.599 08:06:28 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:19.599 08:06:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.599 08:06:28 accel -- common/autotest_common.sh@10 -- # set +x 00:10:19.599 ************************************ 00:10:19.599 START TEST accel_decomp_full_mthread 00:10:19.599 ************************************ 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:19.599 08:06:28 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:19.599 [2024-07-21 08:06:28.918018] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:19.599 [2024-07-21 08:06:28.918084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002104 ] 00:10:19.599 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.599 [2024-07-21 08:06:28.979429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.599 [2024-07-21 08:06:29.071065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:29 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.599 08:06:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:20.987 00:10:20.987 real 0m1.435s 00:10:20.987 user 0m1.295s 00:10:20.987 sys 0m0.144s 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.987 08:06:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:20.987 ************************************ 00:10:20.987 END TEST accel_decomp_full_mthread 00:10:20.987 ************************************ 00:10:20.987 08:06:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:20.987 08:06:30 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:10:20.987 08:06:30 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:20.987 
08:06:30 accel -- accel/accel.sh@137 -- # build_accel_config 00:10:20.987 08:06:30 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:20.987 08:06:30 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:20.987 08:06:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.987 08:06:30 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:20.987 08:06:30 accel -- common/autotest_common.sh@10 -- # set +x 00:10:20.987 08:06:30 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.987 08:06:30 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.987 08:06:30 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:20.987 08:06:30 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:20.987 08:06:30 accel -- accel/accel.sh@41 -- # jq -r . 00:10:20.987 ************************************ 00:10:20.987 START TEST accel_dif_functional_tests 00:10:20.987 ************************************ 00:10:20.987 08:06:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:20.987 [2024-07-21 08:06:30.419688] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:10:20.987 [2024-07-21 08:06:30.419750] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002261 ] 00:10:20.987 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.987 [2024-07-21 08:06:30.479712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.987 [2024-07-21 08:06:30.574082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.987 [2024-07-21 08:06:30.574151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.987 [2024-07-21 08:06:30.574154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.262 00:10:21.262 00:10:21.262 CUnit - A unit testing framework for C - Version 2.1-3 00:10:21.262 http://cunit.sourceforge.net/ 00:10:21.262 00:10:21.262 00:10:21.262 Suite: accel_dif 00:10:21.262 Test: verify: DIF generated, GUARD check ...passed 00:10:21.263 Test: verify: DIF generated, APPTAG check ...passed 00:10:21.263 Test: verify: DIF generated, REFTAG check ...passed 00:10:21.263 Test: verify: DIF not generated, GUARD check ...[2024-07-21 08:06:30.668712] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:21.263 passed 00:10:21.263 Test: verify: DIF not generated, APPTAG check ...[2024-07-21 08:06:30.668786] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:21.263 passed 00:10:21.263 Test: verify: DIF not generated, REFTAG check ...[2024-07-21 08:06:30.668818] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:21.263 passed 00:10:21.263 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:21.263 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-21 08:06:30.668880] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:10:21.263 passed 00:10:21.263 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:21.263 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:21.263 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:21.263 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-21 08:06:30.669018] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:21.263 passed 00:10:21.263 Test: verify copy: DIF generated, GUARD check ...passed 00:10:21.263 Test: verify copy: DIF generated, APPTAG check ...passed 00:10:21.263 Test: verify copy: DIF generated, REFTAG check ...passed 00:10:21.263 Test: verify copy: DIF not generated, GUARD check ...[2024-07-21 08:06:30.669171] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:21.263 passed 00:10:21.263 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-21 08:06:30.669211] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:21.263 passed 00:10:21.263 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-21 08:06:30.669250] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:21.263 passed 00:10:21.263 Test: generate copy: DIF generated, GUARD check ...passed 00:10:21.263 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:21.263 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:21.263 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:21.263 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:21.263 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:21.263 Test: generate copy: iovecs-len validate ...[2024-07-21 08:06:30.669457] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:10:21.263 passed 00:10:21.263 Test: generate copy: buffer alignment validate ...passed 00:10:21.263 00:10:21.263 Run Summary: Type Total Ran Passed Failed Inactive 00:10:21.263 suites 1 1 n/a 0 0 00:10:21.263 tests 26 26 26 0 0 00:10:21.263 asserts 115 115 115 0 n/a 00:10:21.263 00:10:21.263 Elapsed time = 0.002 seconds 00:10:21.263 00:10:21.263 real 0m0.493s 00:10:21.263 user 0m0.771s 00:10:21.263 sys 0m0.180s 00:10:21.263 08:06:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.263 08:06:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:10:21.263 ************************************ 00:10:21.263 END TEST accel_dif_functional_tests 00:10:21.263 ************************************ 00:10:21.521 08:06:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:21.521 00:10:21.521 real 0m31.541s 00:10:21.521 user 0m35.018s 00:10:21.521 sys 0m4.543s 00:10:21.521 08:06:30 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.521 08:06:30 accel -- common/autotest_common.sh@10 -- # set +x 00:10:21.521 ************************************ 00:10:21.521 END TEST accel 00:10:21.521 ************************************ 00:10:21.521 08:06:30 -- common/autotest_common.sh@1142 -- # return 0 00:10:21.521 08:06:30 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:21.521 08:06:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:21.521 08:06:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.521 08:06:30 -- common/autotest_common.sh@10 -- # set +x 00:10:21.521 ************************************ 00:10:21.521 START TEST accel_rpc 00:10:21.521 ************************************ 00:10:21.521 08:06:30 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:21.521 * Looking for test storage... 
00:10:21.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:10:21.521 08:06:30 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:21.521 08:06:30 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4002451 00:10:21.521 08:06:30 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:21.521 08:06:30 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4002451 00:10:21.521 08:06:30 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 4002451 ']' 00:10:21.521 08:06:30 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.521 08:06:30 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.521 08:06:30 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.521 08:06:30 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.521 08:06:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.521 [2024-07-21 08:06:31.039518] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:10:21.521 [2024-07-21 08:06:31.039639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002451 ] 00:10:21.521 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.521 [2024-07-21 08:06:31.097304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.788 [2024-07-21 08:06:31.181723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.788 08:06:31 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.788 08:06:31 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:21.788 08:06:31 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:21.788 08:06:31 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:21.788 08:06:31 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:21.788 08:06:31 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:21.788 08:06:31 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:21.788 08:06:31 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:21.788 08:06:31 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.788 08:06:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.788 ************************************ 00:10:21.788 START TEST accel_assign_opcode 00:10:21.788 ************************************ 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:10:21.788 [2024-07-21 08:06:31.266367] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:21.788 [2024-07-21 08:06:31.274378] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.788 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:22.046 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.046 08:06:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:22.046 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.046 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:22.046 08:06:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:22.046 08:06:31 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:10:22.046 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.046 software 00:10:22.046 00:10:22.046 real 0m0.296s 00:10:22.046 user 0m0.037s 00:10:22.046 sys 0m0.009s 00:10:22.046 08:06:31 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.046 08:06:31 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:22.046 ************************************ 00:10:22.046 END TEST accel_assign_opcode 00:10:22.046 ************************************ 00:10:22.046 08:06:31 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:22.046 08:06:31 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4002451 00:10:22.046 08:06:31 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 4002451 ']' 00:10:22.046 08:06:31 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 4002451 00:10:22.046 08:06:31 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:10:22.046 08:06:31 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:22.046 08:06:31 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4002451 00:10:22.046 08:06:31 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:22.046 08:06:31 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:22.046 08:06:31 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4002451' 00:10:22.046 killing process with pid 4002451 00:10:22.046 08:06:31 accel_rpc -- common/autotest_common.sh@967 -- # kill 4002451 00:10:22.046 08:06:31 accel_rpc -- common/autotest_common.sh@972 -- # wait 4002451 00:10:22.611 00:10:22.611 real 0m1.079s 00:10:22.611 user 0m1.011s 00:10:22.611 sys 0m0.416s 00:10:22.611 08:06:32 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.611 08:06:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.611 ************************************ 00:10:22.611 END TEST accel_rpc 00:10:22.611 ************************************ 00:10:22.611 08:06:32 -- common/autotest_common.sh@1142 -- # return 0 00:10:22.611 08:06:32 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:22.611 08:06:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:22.611 08:06:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.611 08:06:32 -- common/autotest_common.sh@10 -- # set +x 00:10:22.611 ************************************ 00:10:22.611 START TEST app_cmdline 00:10:22.611 ************************************ 00:10:22.611 08:06:32 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:22.611 * Looking for test storage... 00:10:22.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:22.611 08:06:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:22.611 08:06:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4002655 00:10:22.611 08:06:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:22.611 08:06:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4002655 00:10:22.611 08:06:32 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 4002655 ']' 00:10:22.611 08:06:32 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.611 08:06:32 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.611 08:06:32 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.611 08:06:32 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.611 08:06:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:22.611 [2024-07-21 08:06:32.173441] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:10:22.611 [2024-07-21 08:06:32.173518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002655 ] 00:10:22.611 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.611 [2024-07-21 08:06:32.233332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.869 [2024-07-21 08:06:32.324773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.126 08:06:32 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.127 08:06:32 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:10:23.127 08:06:32 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:23.384 { 00:10:23.384 "version": "SPDK v24.09-pre git sha1 89fd17309", 00:10:23.384 "fields": { 00:10:23.384 "major": 24, 00:10:23.384 "minor": 9, 00:10:23.384 "patch": 0, 00:10:23.384 "suffix": "-pre", 00:10:23.384 "commit": "89fd17309" 00:10:23.384 } 00:10:23.384 } 00:10:23.384 08:06:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:23.384 08:06:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:23.384 08:06:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:23.384 08:06:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:23.384 08:06:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.384 08:06:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:23.384 08:06:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:23.384 08:06:32 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.384 08:06:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:23.384 08:06:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:23.384 08:06:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:23.384 08:06:32 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:23.640 request: 00:10:23.640 { 00:10:23.640 "method": "env_dpdk_get_mem_stats", 00:10:23.640 "req_id": 1 
00:10:23.640 } 00:10:23.640 Got JSON-RPC error response 00:10:23.640 response: 00:10:23.640 { 00:10:23.640 "code": -32601, 00:10:23.641 "message": "Method not found" 00:10:23.641 } 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:23.641 08:06:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4002655 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 4002655 ']' 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 4002655 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4002655 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4002655' 00:10:23.641 killing process with pid 4002655 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@967 -- # kill 4002655 00:10:23.641 08:06:33 app_cmdline -- common/autotest_common.sh@972 -- # wait 4002655 00:10:24.204 00:10:24.204 real 0m1.496s 00:10:24.204 user 0m1.824s 00:10:24.204 sys 0m0.482s 00:10:24.204 08:06:33 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.204 08:06:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:24.204 ************************************ 00:10:24.204 END TEST app_cmdline 00:10:24.204 ************************************ 00:10:24.204 08:06:33 -- 
common/autotest_common.sh@1142 -- # return 0 00:10:24.204 08:06:33 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:24.204 08:06:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:24.204 08:06:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.204 08:06:33 -- common/autotest_common.sh@10 -- # set +x 00:10:24.204 ************************************ 00:10:24.204 START TEST version 00:10:24.204 ************************************ 00:10:24.204 08:06:33 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:24.204 * Looking for test storage... 00:10:24.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:24.204 08:06:33 version -- app/version.sh@17 -- # get_header_version major 00:10:24.204 08:06:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:24.204 08:06:33 version -- app/version.sh@14 -- # cut -f2 00:10:24.204 08:06:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:24.204 08:06:33 version -- app/version.sh@17 -- # major=24 00:10:24.204 08:06:33 version -- app/version.sh@18 -- # get_header_version minor 00:10:24.204 08:06:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:24.204 08:06:33 version -- app/version.sh@14 -- # cut -f2 00:10:24.204 08:06:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:24.204 08:06:33 version -- app/version.sh@18 -- # minor=9 00:10:24.204 08:06:33 version -- app/version.sh@19 -- # get_header_version patch 00:10:24.204 08:06:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:24.204 
08:06:33 version -- app/version.sh@14 -- # cut -f2 00:10:24.204 08:06:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:24.204 08:06:33 version -- app/version.sh@19 -- # patch=0 00:10:24.204 08:06:33 version -- app/version.sh@20 -- # get_header_version suffix 00:10:24.204 08:06:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:24.204 08:06:33 version -- app/version.sh@14 -- # cut -f2 00:10:24.204 08:06:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:24.204 08:06:33 version -- app/version.sh@20 -- # suffix=-pre 00:10:24.204 08:06:33 version -- app/version.sh@22 -- # version=24.9 00:10:24.204 08:06:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:24.204 08:06:33 version -- app/version.sh@28 -- # version=24.9rc0 00:10:24.204 08:06:33 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:24.204 08:06:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:24.204 08:06:33 version -- app/version.sh@30 -- # py_version=24.9rc0 00:10:24.204 08:06:33 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:10:24.204 00:10:24.204 real 0m0.104s 00:10:24.204 user 0m0.058s 00:10:24.204 sys 0m0.068s 00:10:24.204 08:06:33 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.204 08:06:33 version -- common/autotest_common.sh@10 -- # set +x 00:10:24.204 ************************************ 00:10:24.204 END TEST version 00:10:24.204 ************************************ 00:10:24.204 08:06:33 -- common/autotest_common.sh@1142 -- # return 0 00:10:24.204 08:06:33 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:10:24.204 08:06:33 -- spdk/autotest.sh@198 -- # uname -s 00:10:24.204 08:06:33 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:10:24.204 08:06:33 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:10:24.204 08:06:33 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:10:24.204 08:06:33 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:10:24.204 08:06:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:24.204 08:06:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:24.204 08:06:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:24.204 08:06:33 -- common/autotest_common.sh@10 -- # set +x 00:10:24.204 08:06:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:10:24.204 08:06:33 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:10:24.204 08:06:33 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:10:24.204 08:06:33 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:10:24.204 08:06:33 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:10:24.204 08:06:33 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:10:24.204 08:06:33 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:24.204 08:06:33 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:24.204 08:06:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.204 08:06:33 -- common/autotest_common.sh@10 -- # set +x 00:10:24.204 ************************************ 00:10:24.204 START TEST nvmf_tcp 00:10:24.204 ************************************ 00:10:24.204 08:06:33 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:24.204 * Looking for test storage... 00:10:24.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:24.204 08:06:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:24.204 08:06:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:24.204 08:06:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.462 08:06:33 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.462 08:06:33 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.462 08:06:33 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.462 08:06:33 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.462 08:06:33 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.462 08:06:33 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.462 08:06:33 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:10:24.462 08:06:33 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.462 08:06:33 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:10:24.462 08:06:33 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:24.462 08:06:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:10:24.462 08:06:33 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:24.462 08:06:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:24.462 08:06:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.462 08:06:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:24.462 ************************************ 00:10:24.462 START TEST nvmf_example 00:10:24.462 ************************************ 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:24.462 * Looking for test storage... 
00:10:24.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.462 08:06:33 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.462 08:06:33 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.463 08:06:33 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.463 
08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:24.463 08:06:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:26.361 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:26.361 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:26.361 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:26.361 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.361 08:06:35 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.361 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:26.640 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:26.640 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.640 08:06:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:26.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:26.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:10:26.640 00:10:26.640 --- 10.0.0.2 ping statistics --- 00:10:26.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.640 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:10:26.640 00:10:26.640 --- 10.0.0.1 ping statistics --- 00:10:26.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.640 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.640 08:06:36 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4004649 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4004649 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 4004649 ']' 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.640 08:06:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.640 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.573 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.830 08:06:37 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:27.830 08:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:27.830 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.786 Initializing NVMe Controllers 00:10:37.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:37.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:37.786 Initialization complete. Launching workers. 
00:10:37.786 ======================================================== 00:10:37.786 Latency(us) 00:10:37.786 Device Information : IOPS MiB/s Average min max 00:10:37.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15164.25 59.24 4220.20 830.48 16104.46 00:10:37.786 ======================================================== 00:10:37.786 Total : 15164.25 59.24 4220.20 830.48 16104.46 00:10:37.786 00:10:37.786 08:06:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:37.786 08:06:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:37.786 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:37.786 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:37.786 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:37.786 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:37.786 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:37.786 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:37.786 rmmod nvme_tcp 00:10:37.786 rmmod nvme_fabrics 00:10:38.043 rmmod nvme_keyring 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 4004649 ']' 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 4004649 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 4004649 ']' 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 4004649 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4004649 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4004649' 00:10:38.043 killing process with pid 4004649 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 4004649 00:10:38.043 08:06:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 4004649 00:10:38.302 nvmf threads initialize successfully 00:10:38.302 bdev subsystem init successfully 00:10:38.302 created a nvmf target service 00:10:38.302 create targets's poll groups done 00:10:38.302 all subsystems of target started 00:10:38.302 nvmf target is running 00:10:38.302 all subsystems of target stopped 00:10:38.302 destroy targets's poll groups done 00:10:38.302 destroyed the nvmf target service 00:10:38.302 bdev subsystem finish successfully 00:10:38.302 nvmf threads destroy successfully 00:10:38.302 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:38.302 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:38.302 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:38.302 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:38.302 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:38.302 08:06:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.302 08:06:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:38.302 08:06:47 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.202 08:06:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:40.202 08:06:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:40.202 08:06:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:40.202 08:06:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.202 00:10:40.202 real 0m15.900s 00:10:40.202 user 0m44.893s 00:10:40.202 sys 0m3.362s 00:10:40.202 08:06:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.202 08:06:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.202 ************************************ 00:10:40.202 END TEST nvmf_example 00:10:40.202 ************************************ 00:10:40.202 08:06:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:40.202 08:06:49 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:40.202 08:06:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.202 08:06:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.202 08:06:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.202 ************************************ 00:10:40.202 START TEST nvmf_filesystem 00:10:40.202 ************************************ 00:10:40.202 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:40.462 * Looking for test storage... 
00:10:40.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:40.462 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:40.462 08:06:49 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem 
-- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # 
CONFIG_APPS=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:40.463 08:06:49 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:40.463 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ 
#ifndef SPDK_CONFIG_H 00:10:40.463 #define SPDK_CONFIG_H 00:10:40.463 #define SPDK_CONFIG_APPS 1 00:10:40.463 #define SPDK_CONFIG_ARCH native 00:10:40.463 #undef SPDK_CONFIG_ASAN 00:10:40.463 #undef SPDK_CONFIG_AVAHI 00:10:40.463 #undef SPDK_CONFIG_CET 00:10:40.463 #define SPDK_CONFIG_COVERAGE 1 00:10:40.463 #define SPDK_CONFIG_CROSS_PREFIX 00:10:40.463 #undef SPDK_CONFIG_CRYPTO 00:10:40.463 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:40.463 #undef SPDK_CONFIG_CUSTOMOCF 00:10:40.463 #undef SPDK_CONFIG_DAOS 00:10:40.463 #define SPDK_CONFIG_DAOS_DIR 00:10:40.463 #define SPDK_CONFIG_DEBUG 1 00:10:40.463 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:40.463 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:40.463 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:40.463 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:40.463 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:40.463 #undef SPDK_CONFIG_DPDK_UADK 00:10:40.463 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:40.463 #define SPDK_CONFIG_EXAMPLES 1 00:10:40.463 #undef SPDK_CONFIG_FC 00:10:40.463 #define SPDK_CONFIG_FC_PATH 00:10:40.463 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:40.463 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:40.463 #undef SPDK_CONFIG_FUSE 00:10:40.463 #undef SPDK_CONFIG_FUZZER 00:10:40.463 #define SPDK_CONFIG_FUZZER_LIB 00:10:40.463 #undef SPDK_CONFIG_GOLANG 00:10:40.463 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:40.463 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:40.463 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:40.463 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:40.463 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:40.464 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:40.464 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:40.464 #define SPDK_CONFIG_IDXD 1 00:10:40.464 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:40.464 #undef 
SPDK_CONFIG_IPSEC_MB 00:10:40.464 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:40.464 #define SPDK_CONFIG_ISAL 1 00:10:40.464 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:40.464 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:40.464 #define SPDK_CONFIG_LIBDIR 00:10:40.464 #undef SPDK_CONFIG_LTO 00:10:40.464 #define SPDK_CONFIG_MAX_LCORES 128 00:10:40.464 #define SPDK_CONFIG_NVME_CUSE 1 00:10:40.464 #undef SPDK_CONFIG_OCF 00:10:40.464 #define SPDK_CONFIG_OCF_PATH 00:10:40.464 #define SPDK_CONFIG_OPENSSL_PATH 00:10:40.464 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:40.464 #define SPDK_CONFIG_PGO_DIR 00:10:40.464 #undef SPDK_CONFIG_PGO_USE 00:10:40.464 #define SPDK_CONFIG_PREFIX /usr/local 00:10:40.464 #undef SPDK_CONFIG_RAID5F 00:10:40.464 #undef SPDK_CONFIG_RBD 00:10:40.464 #define SPDK_CONFIG_RDMA 1 00:10:40.464 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:40.464 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:40.464 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:40.464 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:40.464 #define SPDK_CONFIG_SHARED 1 00:10:40.464 #undef SPDK_CONFIG_SMA 00:10:40.464 #define SPDK_CONFIG_TESTS 1 00:10:40.464 #undef SPDK_CONFIG_TSAN 00:10:40.464 #define SPDK_CONFIG_UBLK 1 00:10:40.464 #define SPDK_CONFIG_UBSAN 1 00:10:40.464 #undef SPDK_CONFIG_UNIT_TESTS 00:10:40.464 #undef SPDK_CONFIG_URING 00:10:40.464 #define SPDK_CONFIG_URING_PATH 00:10:40.464 #undef SPDK_CONFIG_URING_ZNS 00:10:40.464 #undef SPDK_CONFIG_USDT 00:10:40.464 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:40.464 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:40.464 #define SPDK_CONFIG_VFIO_USER 1 00:10:40.464 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:40.464 #define SPDK_CONFIG_VHOST 1 00:10:40.464 #define SPDK_CONFIG_VIRTIO 1 00:10:40.464 #undef SPDK_CONFIG_VTUNE 00:10:40.464 #define SPDK_CONFIG_VTUNE_DIR 00:10:40.464 #define SPDK_CONFIG_WERROR 1 00:10:40.464 #define SPDK_CONFIG_WPDK_DIR 00:10:40.464 #undef SPDK_CONFIG_XNVME 00:10:40.464 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:40.464 08:06:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:40.464 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:40.465 08:06:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:40.465 08:06:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export 
SPDK_TEST_VMD 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export 
SPDK_TEST_NVMF_NICS 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:40.465 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:40.466 08:06:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:10:40.466 08:06:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 4006382 ]] 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 4006382 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.QNAzfj 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QNAzfj/tests/target /tmp/spdk.QNAzfj 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:10:40.466 08:06:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=53473083392 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8521625600 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941716480 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=8761344 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996692992 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=663552 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:10:40.466 * Looking for test storage... 
00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=53473083392 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:10:40.466 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10736218112 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.467 08:06:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.467 08:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:40.467 08:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:40.467 08:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:40.467 08:06:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:42.368 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:42.368 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:42.368 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:42.368 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:42.368 08:06:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:42.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:42.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:10:42.626 00:10:42.626 --- 10.0.0.2 ping statistics --- 00:10:42.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.626 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:42.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:10:42.626 00:10:42.626 --- 10.0.0.1 ping statistics --- 00:10:42.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.626 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:42.626 ************************************ 00:10:42.626 START TEST nvmf_filesystem_no_in_capsule 00:10:42.626 ************************************ 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4008014 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4008014 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4008014 ']' 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
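The namespace plumbing traced at common.sh@243 and @270 above is a small array trick: the `ip netns exec` prefix is kept as an array and spliced in front of the target command, so `nvmf_tgt` launches inside `cvl_0_0_ns_spdk` without any string re-quoting. A standalone sketch (the `nvmf_tgt -i 0` command here is a shortened stand-in for the full invocation in the log):

```shell
#!/usr/bin/env bash
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

# Prefix command, stored as an array so each word stays a separate argv entry.
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# Stand-in for the real target app command line from the log.
NVMF_APP=(nvmf_tgt -i 0)

# Splice the namespace prefix in front of the app command (common.sh@270).
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[@]}"
```

Using arrays rather than a single string means arguments containing spaces would survive intact when the combined command is eventually executed as `"${NVMF_APP[@]}"`.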
00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.626 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.627 [2024-07-21 08:06:52.212374] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:42.627 [2024-07-21 08:06:52.212452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.627 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.884 [2024-07-21 08:06:52.293057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.884 [2024-07-21 08:06:52.398514] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.884 [2024-07-21 08:06:52.398584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.884 [2024-07-21 08:06:52.398608] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.884 [2024-07-21 08:06:52.398655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.884 [2024-07-21 08:06:52.398676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
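The `-m 0xF` core mask passed to `nvmf_tgt` above corresponds to the "Total cores available: 4" notice and the four reactors that start next: each set bit selects one core. A hedged sketch of counting the selected cores from such a mask (this popcount loop is illustrative, not SPDK's own parsing code):

```shell
#!/usr/bin/env bash
mask=0xF   # core mask as passed via -m; 0xF selects cores 0-3

# Count the set bits: each one is a core a reactor will be pinned to.
cores=0
for ((i = 0; i < 32; i++)); do
  cores=$(( cores + ((mask >> i) & 1) ))
done

echo "$cores"
```

With `mask=0xF` the loop counts four set bits, matching the four "Reactor started on core N" lines in the log.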
00:10:42.884 [2024-07-21 08:06:52.398738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.884 [2024-07-21 08:06:52.398800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.884 [2024-07-21 08:06:52.398867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.884 [2024-07-21 08:06:52.398876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.142 [2024-07-21 08:06:52.577843] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.142 Malloc1 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.142 08:06:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.142 [2024-07-21 08:06:52.764274] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.142 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.399 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.399 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:43.399 { 00:10:43.399 "name": "Malloc1", 00:10:43.399 "aliases": [ 00:10:43.399 "beffc606-3f2a-4efd-b16a-0e255ea896c5" 00:10:43.399 ], 00:10:43.399 "product_name": "Malloc disk", 
00:10:43.399 "block_size": 512, 00:10:43.399 "num_blocks": 1048576, 00:10:43.399 "uuid": "beffc606-3f2a-4efd-b16a-0e255ea896c5", 00:10:43.399 "assigned_rate_limits": { 00:10:43.399 "rw_ios_per_sec": 0, 00:10:43.399 "rw_mbytes_per_sec": 0, 00:10:43.399 "r_mbytes_per_sec": 0, 00:10:43.399 "w_mbytes_per_sec": 0 00:10:43.399 }, 00:10:43.399 "claimed": true, 00:10:43.399 "claim_type": "exclusive_write", 00:10:43.399 "zoned": false, 00:10:43.399 "supported_io_types": { 00:10:43.399 "read": true, 00:10:43.399 "write": true, 00:10:43.399 "unmap": true, 00:10:43.399 "flush": true, 00:10:43.399 "reset": true, 00:10:43.399 "nvme_admin": false, 00:10:43.399 "nvme_io": false, 00:10:43.399 "nvme_io_md": false, 00:10:43.399 "write_zeroes": true, 00:10:43.399 "zcopy": true, 00:10:43.399 "get_zone_info": false, 00:10:43.399 "zone_management": false, 00:10:43.399 "zone_append": false, 00:10:43.399 "compare": false, 00:10:43.399 "compare_and_write": false, 00:10:43.399 "abort": true, 00:10:43.399 "seek_hole": false, 00:10:43.399 "seek_data": false, 00:10:43.399 "copy": true, 00:10:43.399 "nvme_iov_md": false 00:10:43.399 }, 00:10:43.399 "memory_domains": [ 00:10:43.399 { 00:10:43.399 "dma_device_id": "system", 00:10:43.399 "dma_device_type": 1 00:10:43.399 }, 00:10:43.399 { 00:10:43.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.399 "dma_device_type": 2 00:10:43.399 } 00:10:43.399 ], 00:10:43.399 "driver_specific": {} 00:10:43.399 } 00:10:43.399 ]' 00:10:43.399 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:43.399 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:43.399 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:43.399 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:43.399 
08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:43.399 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:43.399 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:43.399 08:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:43.962 08:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.962 08:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:43.962 08:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.962 08:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:43.962 08:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:46.483 08:06:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:47.046 08:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:48.418 08:06:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.418 ************************************ 00:10:48.418 START TEST filesystem_ext4 00:10:48.418 ************************************ 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
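The `make_filesystem` trace around common.sh@927-@935 (here and in the btrfs/xfs runs below) picks the force flag per filesystem: `mkfs.ext4` takes `-F`, while `mkfs.btrfs` and `mkfs.xfs` take lowercase `-f`. A minimal sketch of that selection, with a hypothetical helper name standing in for the inline logic:

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the traced force-flag selection:
# ext4's mkfs forces with -F, btrfs/xfs force with -f.
make_filesystem_flag() {
  local fstype=$1 force
  if [ "$fstype" = ext4 ]; then
    force=-F
  else
    force=-f
  fi
  printf '%s\n' "$force"
}

make_filesystem_flag ext4
make_filesystem_flag xfs
```

The distinction matters because passing `-F` to `mkfs.xfs` or `-f` to `mkfs.ext4` would be rejected (or mean something different), so the helper has to branch on the filesystem type before invoking `mkfs.$fstype $force $dev_name`.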
00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:10:48.418 08:06:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:48.418 mke2fs 1.46.5 (30-Dec-2021) 00:10:48.418 Discarding device blocks: 0/522240 done 00:10:48.418 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:48.418 Filesystem UUID: da90628d-3912-45fd-9e22-3aef1b593318 00:10:48.418 Superblock backups stored on blocks: 00:10:48.418 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:48.418 00:10:48.418 Allocating group tables: 0/64 done 00:10:48.418 Writing inode tables: 0/64 done 00:10:49.359 Creating journal (8192 blocks): done 00:10:49.359 Writing superblocks and filesystem accounting information: 0/64 done 00:10:49.359 00:10:49.359 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4008014 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:49.360 00:10:49.360 real 0m1.185s 00:10:49.360 user 0m0.018s 00:10:49.360 sys 0m0.048s 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:49.360 ************************************ 00:10:49.360 END TEST filesystem_ext4 00:10:49.360 ************************************ 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.360 08:06:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.360 ************************************ 00:10:49.360 START TEST filesystem_btrfs 00:10:49.360 ************************************ 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:10:49.360 08:06:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:49.925 btrfs-progs v6.6.2 00:10:49.925 See https://btrfs.readthedocs.io for more 
information. 00:10:49.925 00:10:49.925 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:49.925 NOTE: several default settings have changed in version 5.15, please make sure 00:10:49.925 this does not affect your deployments: 00:10:49.925 - DUP for metadata (-m dup) 00:10:49.925 - enabled no-holes (-O no-holes) 00:10:49.925 - enabled free-space-tree (-R free-space-tree) 00:10:49.925 00:10:49.925 Label: (null) 00:10:49.925 UUID: 9b5273ab-359a-4d94-8a54-e19b74de8699 00:10:49.925 Node size: 16384 00:10:49.925 Sector size: 4096 00:10:49.925 Filesystem size: 510.00MiB 00:10:49.925 Block group profiles: 00:10:49.925 Data: single 8.00MiB 00:10:49.925 Metadata: DUP 32.00MiB 00:10:49.925 System: DUP 8.00MiB 00:10:49.925 SSD detected: yes 00:10:49.925 Zoned device: no 00:10:49.925 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:49.925 Runtime features: free-space-tree 00:10:49.925 Checksum: crc32c 00:10:49.925 Number of devices: 1 00:10:49.925 Devices: 00:10:49.925 ID SIZE PATH 00:10:49.925 1 510.00MiB /dev/nvme0n1p1 00:10:49.925 00:10:49.925 08:06:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:10:49.925 08:06:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:50.861 08:07:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4008014 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.861 00:10:50.861 real 0m1.342s 00:10:50.861 user 0m0.024s 00:10:50.861 sys 0m0.106s 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:50.861 ************************************ 00:10:50.861 END TEST filesystem_btrfs 00:10:50.861 ************************************ 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.861 ************************************ 00:10:50.861 START TEST filesystem_xfs 00:10:50.861 ************************************ 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:10:50.861 08:07:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:50.861 meta-data=/dev/nvme0n1p1 isize=512 
agcount=4, agsize=32640 blks 00:10:50.861 = sectsz=512 attr=2, projid32bit=1 00:10:50.861 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:50.861 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:50.861 data = bsize=4096 blocks=130560, imaxpct=25 00:10:50.861 = sunit=0 swidth=0 blks 00:10:50.861 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:50.861 log =internal log bsize=4096 blocks=16384, version=2 00:10:50.861 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:50.861 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:51.792 Discarding blocks...Done. 00:10:51.792 08:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:10:51.792 08:07:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:54.312 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:54.312 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:54.312 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:54.312 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:54.312 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:54.312 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:54.568 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4008014 00:10:54.568 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:54.568 08:07:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:54.568 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:54.568 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:54.568 00:10:54.568 real 0m3.654s 00:10:54.568 user 0m0.017s 00:10:54.568 sys 0m0.058s 00:10:54.568 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.568 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:54.568 ************************************ 00:10:54.568 END TEST filesystem_xfs 00:10:54.568 ************************************ 00:10:54.568 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:54.568 08:07:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:54.826 08:07:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4008014 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4008014 ']' 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 4008014 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 4008014 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4008014' 00:10:54.826 killing process with pid 4008014 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 4008014 00:10:54.826 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 4008014 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:55.413 00:10:55.413 real 0m12.640s 00:10:55.413 user 0m48.663s 00:10:55.413 sys 0m1.794s 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.413 ************************************ 00:10:55.413 END TEST nvmf_filesystem_no_in_capsule 00:10:55.413 ************************************ 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.413 ************************************ 00:10:55.413 START TEST 
nvmf_filesystem_in_capsule 00:10:55.413 ************************************ 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4009705 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4009705 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4009705 ']' 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:55.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.413 08:07:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.413 [2024-07-21 08:07:04.909635] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:10:55.413 [2024-07-21 08:07:04.909726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.413 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.413 [2024-07-21 08:07:04.977689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.671 [2024-07-21 08:07:05.070564] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.671 [2024-07-21 08:07:05.070628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.671 [2024-07-21 08:07:05.070655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.671 [2024-07-21 08:07:05.070676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.671 [2024-07-21 08:07:05.070688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:55.671 [2024-07-21 08:07:05.070772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.671 [2024-07-21 08:07:05.070829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.671 [2024-07-21 08:07:05.070883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.671 [2024-07-21 08:07:05.070886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.671 [2024-07-21 08:07:05.239707] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.671 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.937 Malloc1 00:10:55.937 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.937 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:55.937 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.937 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.937 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.937 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:55.937 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.937 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.937 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.938 08:07:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.938 [2024-07-21 08:07:05.426161] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:55.938 { 00:10:55.938 "name": "Malloc1", 00:10:55.938 "aliases": [ 00:10:55.938 "b4b63d66-0366-41b7-bd75-1e5041177116" 00:10:55.938 ], 00:10:55.938 "product_name": "Malloc disk", 00:10:55.938 "block_size": 512, 00:10:55.938 "num_blocks": 1048576, 00:10:55.938 "uuid": "b4b63d66-0366-41b7-bd75-1e5041177116", 00:10:55.938 "assigned_rate_limits": { 
00:10:55.938 "rw_ios_per_sec": 0, 00:10:55.938 "rw_mbytes_per_sec": 0, 00:10:55.938 "r_mbytes_per_sec": 0, 00:10:55.938 "w_mbytes_per_sec": 0 00:10:55.938 }, 00:10:55.938 "claimed": true, 00:10:55.938 "claim_type": "exclusive_write", 00:10:55.938 "zoned": false, 00:10:55.938 "supported_io_types": { 00:10:55.938 "read": true, 00:10:55.938 "write": true, 00:10:55.938 "unmap": true, 00:10:55.938 "flush": true, 00:10:55.938 "reset": true, 00:10:55.938 "nvme_admin": false, 00:10:55.938 "nvme_io": false, 00:10:55.938 "nvme_io_md": false, 00:10:55.938 "write_zeroes": true, 00:10:55.938 "zcopy": true, 00:10:55.938 "get_zone_info": false, 00:10:55.938 "zone_management": false, 00:10:55.938 "zone_append": false, 00:10:55.938 "compare": false, 00:10:55.938 "compare_and_write": false, 00:10:55.938 "abort": true, 00:10:55.938 "seek_hole": false, 00:10:55.938 "seek_data": false, 00:10:55.938 "copy": true, 00:10:55.938 "nvme_iov_md": false 00:10:55.938 }, 00:10:55.938 "memory_domains": [ 00:10:55.938 { 00:10:55.938 "dma_device_id": "system", 00:10:55.938 "dma_device_type": 1 00:10:55.938 }, 00:10:55.938 { 00:10:55.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.938 "dma_device_type": 2 00:10:55.938 } 00:10:55.938 ], 00:10:55.938 "driver_specific": {} 00:10:55.938 } 00:10:55.938 ]' 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:55.938 08:07:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:55.938 08:07:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.517 08:07:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.517 08:07:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:56.517 08:07:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.517 08:07:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:56.517 08:07:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:59.038 08:07:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:59.600 08:07:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:00.529 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:00.529 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:11:00.529 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:00.529 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.529 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.785 ************************************ 00:11:00.785 START TEST filesystem_in_capsule_ext4 00:11:00.785 ************************************ 00:11:00.785 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:00.785 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:00.786 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.786 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:00.786 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:00.786 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:00.786 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:00.786 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:00.786 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:00.786 08:07:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:00.786 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:00.786 mke2fs 1.46.5 (30-Dec-2021) 00:11:00.786 Discarding device blocks: 0/522240 done 00:11:00.786 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:00.786 Filesystem UUID: d559b57a-cb28-4975-8096-3531884db1d9 00:11:00.786 Superblock backups stored on blocks: 00:11:00.786 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:00.786 00:11:00.786 Allocating group tables: 0/64 done 00:11:00.786 Writing inode tables: 0/64 done 00:11:01.042 Creating journal (8192 blocks): done 00:11:01.042 Writing superblocks and filesystem accounting information: 0/64 done 00:11:01.042 00:11:01.042 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:01.042 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:01.298 08:07:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4009705 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.298 00:11:01.298 real 0m0.637s 00:11:01.298 user 0m0.021s 00:11:01.298 sys 0m0.055s 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:01.298 ************************************ 00:11:01.298 END TEST filesystem_in_capsule_ext4 00:11:01.298 ************************************ 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.298 
08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.298 ************************************ 00:11:01.298 START TEST filesystem_in_capsule_btrfs 00:11:01.298 ************************************ 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:01.298 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:01.299 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:01.299 08:07:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:11:01.874 btrfs-progs v6.6.2 00:11:01.874 See https://btrfs.readthedocs.io for more information. 00:11:01.874 00:11:01.874 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:01.874 NOTE: several default settings have changed in version 5.15, please make sure 00:11:01.874 this does not affect your deployments: 00:11:01.874 - DUP for metadata (-m dup) 00:11:01.874 - enabled no-holes (-O no-holes) 00:11:01.874 - enabled free-space-tree (-R free-space-tree) 00:11:01.874 00:11:01.874 Label: (null) 00:11:01.874 UUID: b396b183-592d-449b-8d8a-7a9745a9bb86 00:11:01.874 Node size: 16384 00:11:01.874 Sector size: 4096 00:11:01.874 Filesystem size: 510.00MiB 00:11:01.874 Block group profiles: 00:11:01.874 Data: single 8.00MiB 00:11:01.874 Metadata: DUP 32.00MiB 00:11:01.874 System: DUP 8.00MiB 00:11:01.874 SSD detected: yes 00:11:01.874 Zoned device: no 00:11:01.874 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:01.874 Runtime features: free-space-tree 00:11:01.874 Checksum: crc32c 00:11:01.874 Number of devices: 1 00:11:01.874 Devices: 00:11:01.874 ID SIZE PATH 00:11:01.874 1 510.00MiB /dev/nvme0n1p1 00:11:01.874 00:11:01.874 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:01.874 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:02.436 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:02.436 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:02.436 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:02.436 08:07:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:02.436 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4009705 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:02.437 00:11:02.437 real 0m1.000s 00:11:02.437 user 0m0.024s 00:11:02.437 sys 0m0.109s 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:02.437 ************************************ 00:11:02.437 END TEST filesystem_in_capsule_btrfs 00:11:02.437 ************************************ 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.437 ************************************ 00:11:02.437 START TEST filesystem_in_capsule_xfs 00:11:02.437 ************************************ 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:02.437 08:07:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:02.437 08:07:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:02.437 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:02.437 = sectsz=512 attr=2, projid32bit=1 00:11:02.437 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:02.437 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:02.437 data = bsize=4096 blocks=130560, imaxpct=25 00:11:02.437 = sunit=0 swidth=0 blks 00:11:02.437 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:02.437 log =internal log bsize=4096 blocks=16384, version=2 00:11:02.437 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:02.437 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:03.805 Discarding blocks...Done. 00:11:03.805 08:07:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:03.805 08:07:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:06.329 08:07:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4009705 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.329 00:11:06.329 real 0m3.636s 00:11:06.329 user 0m0.022s 00:11:06.329 sys 0m0.057s 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:06.329 ************************************ 00:11:06.329 END TEST filesystem_in_capsule_xfs 00:11:06.329 ************************************ 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.329 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:06.329 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4009705 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4009705 ']' 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 4009705 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:06.587 08:07:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4009705 00:11:06.587 08:07:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:06.587 08:07:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:06.587 08:07:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4009705' 00:11:06.587 killing process with pid 4009705 00:11:06.587 08:07:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 4009705 00:11:06.587 08:07:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 4009705 00:11:06.846 08:07:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:06.846 00:11:06.846 real 0m11.584s 00:11:06.846 user 0m44.451s 00:11:06.846 sys 0m1.747s 00:11:06.846 08:07:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.846 08:07:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 ************************************ 00:11:06.846 END TEST nvmf_filesystem_in_capsule 00:11:06.846 ************************************ 00:11:06.846 08:07:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:11:06.846 08:07:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:06.846 08:07:16 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:06.846 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:06.846 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:06.846 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:06.846 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:06.846 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:06.846 rmmod nvme_tcp 00:11:07.104 rmmod nvme_fabrics 00:11:07.104 rmmod nvme_keyring 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.104 08:07:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.007 08:07:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:09.007 00:11:09.007 real 0m28.752s 00:11:09.007 user 1m34.035s 00:11:09.007 sys 0m5.135s 00:11:09.007 08:07:18 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.007 08:07:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:09.007 ************************************ 00:11:09.007 END TEST nvmf_filesystem 00:11:09.007 ************************************ 00:11:09.007 08:07:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:09.007 08:07:18 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:09.007 08:07:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.007 08:07:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.007 08:07:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.007 ************************************ 00:11:09.007 START TEST nvmf_target_discovery 00:11:09.007 ************************************ 00:11:09.007 08:07:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:09.266 * Looking for test storage... 
00:11:09.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.266 08:07:18 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:09.266 08:07:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.161 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:11.162 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.162 
08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:11.162 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:11.162 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:11.162 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:11.162 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:11.421 08:07:20 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:11.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:11:11.421 00:11:11.421 --- 10.0.0.2 ping statistics --- 00:11:11.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.421 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:11.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:11:11.421 00:11:11.421 --- 10.0.0.1 ping statistics --- 00:11:11.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.421 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=4013177 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.421 08:07:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 4013177 00:11:11.422 08:07:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 4013177 ']' 00:11:11.422 08:07:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.422 08:07:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.422 08:07:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.422 08:07:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.422 08:07:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.422 [2024-07-21 08:07:20.905200] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:11:11.422 [2024-07-21 08:07:20.905275] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.422 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.422 [2024-07-21 08:07:20.974950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.679 [2024-07-21 08:07:21.069676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.679 [2024-07-21 08:07:21.069731] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.679 [2024-07-21 08:07:21.069747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.679 [2024-07-21 08:07:21.069761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.679 [2024-07-21 08:07:21.069772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:11.679 [2024-07-21 08:07:21.069844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.679 [2024-07-21 08:07:21.069911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.680 [2024-07-21 08:07:21.069960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.680 [2024-07-21 08:07:21.069962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.680 [2024-07-21 08:07:21.228682] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:11.680 08:07:21 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.680 Null1 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.680 [2024-07-21 08:07:21.268998] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:11.680 08:07:21 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.680 Null2 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.680 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.937 Null3 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.937 Null4 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.937 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:12.195 00:11:12.195 Discovery Log Number of Records 6, Generation counter 6 00:11:12.195 =====Discovery Log Entry 0====== 00:11:12.195 trtype: tcp 00:11:12.195 adrfam: ipv4 00:11:12.195 subtype: current discovery subsystem 00:11:12.195 treq: not required 00:11:12.195 portid: 0 00:11:12.195 trsvcid: 4420 00:11:12.195 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:12.195 traddr: 10.0.0.2 00:11:12.195 eflags: explicit discovery connections, duplicate discovery information 00:11:12.195 sectype: none 00:11:12.195 =====Discovery Log Entry 1====== 00:11:12.195 trtype: tcp 00:11:12.195 adrfam: ipv4 00:11:12.195 subtype: nvme subsystem 00:11:12.195 treq: not required 00:11:12.195 portid: 0 00:11:12.195 trsvcid: 4420 00:11:12.195 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:12.195 traddr: 10.0.0.2 00:11:12.195 eflags: none 00:11:12.195 sectype: none 00:11:12.195 =====Discovery Log Entry 2====== 00:11:12.195 trtype: tcp 00:11:12.195 adrfam: ipv4 00:11:12.195 subtype: nvme subsystem 00:11:12.195 treq: not required 00:11:12.195 portid: 
0 00:11:12.195 trsvcid: 4420 00:11:12.195 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:12.195 traddr: 10.0.0.2 00:11:12.195 eflags: none 00:11:12.195 sectype: none 00:11:12.195 =====Discovery Log Entry 3====== 00:11:12.195 trtype: tcp 00:11:12.195 adrfam: ipv4 00:11:12.195 subtype: nvme subsystem 00:11:12.195 treq: not required 00:11:12.195 portid: 0 00:11:12.195 trsvcid: 4420 00:11:12.195 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:12.195 traddr: 10.0.0.2 00:11:12.195 eflags: none 00:11:12.195 sectype: none 00:11:12.195 =====Discovery Log Entry 4====== 00:11:12.195 trtype: tcp 00:11:12.195 adrfam: ipv4 00:11:12.195 subtype: nvme subsystem 00:11:12.195 treq: not required 00:11:12.195 portid: 0 00:11:12.195 trsvcid: 4420 00:11:12.195 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:12.195 traddr: 10.0.0.2 00:11:12.195 eflags: none 00:11:12.195 sectype: none 00:11:12.195 =====Discovery Log Entry 5====== 00:11:12.195 trtype: tcp 00:11:12.195 adrfam: ipv4 00:11:12.195 subtype: discovery subsystem referral 00:11:12.195 treq: not required 00:11:12.195 portid: 0 00:11:12.195 trsvcid: 4430 00:11:12.195 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:12.195 traddr: 10.0.0.2 00:11:12.195 eflags: none 00:11:12.195 sectype: none 00:11:12.195 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:12.195 Perform nvmf subsystem discovery via RPC 00:11:12.195 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:12.195 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.195 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.195 [ 00:11:12.195 { 00:11:12.195 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:12.195 "subtype": "Discovery", 00:11:12.195 "listen_addresses": [ 00:11:12.195 { 00:11:12.195 "trtype": "TCP", 00:11:12.195 "adrfam": "IPv4", 00:11:12.195 "traddr": "10.0.0.2", 
00:11:12.195 "trsvcid": "4420" 00:11:12.195 } 00:11:12.195 ], 00:11:12.195 "allow_any_host": true, 00:11:12.195 "hosts": [] 00:11:12.195 }, 00:11:12.195 { 00:11:12.195 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.195 "subtype": "NVMe", 00:11:12.195 "listen_addresses": [ 00:11:12.195 { 00:11:12.195 "trtype": "TCP", 00:11:12.195 "adrfam": "IPv4", 00:11:12.195 "traddr": "10.0.0.2", 00:11:12.195 "trsvcid": "4420" 00:11:12.195 } 00:11:12.195 ], 00:11:12.195 "allow_any_host": true, 00:11:12.195 "hosts": [], 00:11:12.195 "serial_number": "SPDK00000000000001", 00:11:12.195 "model_number": "SPDK bdev Controller", 00:11:12.195 "max_namespaces": 32, 00:11:12.195 "min_cntlid": 1, 00:11:12.195 "max_cntlid": 65519, 00:11:12.196 "namespaces": [ 00:11:12.196 { 00:11:12.196 "nsid": 1, 00:11:12.196 "bdev_name": "Null1", 00:11:12.196 "name": "Null1", 00:11:12.196 "nguid": "EE30E5A49B90417BB8FECFA30B3A85B8", 00:11:12.196 "uuid": "ee30e5a4-9b90-417b-b8fe-cfa30b3a85b8" 00:11:12.196 } 00:11:12.196 ] 00:11:12.196 }, 00:11:12.196 { 00:11:12.196 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:12.196 "subtype": "NVMe", 00:11:12.196 "listen_addresses": [ 00:11:12.196 { 00:11:12.196 "trtype": "TCP", 00:11:12.196 "adrfam": "IPv4", 00:11:12.196 "traddr": "10.0.0.2", 00:11:12.196 "trsvcid": "4420" 00:11:12.196 } 00:11:12.196 ], 00:11:12.196 "allow_any_host": true, 00:11:12.196 "hosts": [], 00:11:12.196 "serial_number": "SPDK00000000000002", 00:11:12.196 "model_number": "SPDK bdev Controller", 00:11:12.196 "max_namespaces": 32, 00:11:12.196 "min_cntlid": 1, 00:11:12.196 "max_cntlid": 65519, 00:11:12.196 "namespaces": [ 00:11:12.196 { 00:11:12.196 "nsid": 1, 00:11:12.196 "bdev_name": "Null2", 00:11:12.196 "name": "Null2", 00:11:12.196 "nguid": "8D9E46B5F5FD4AB084A693746D76B5AE", 00:11:12.196 "uuid": "8d9e46b5-f5fd-4ab0-84a6-93746d76b5ae" 00:11:12.196 } 00:11:12.196 ] 00:11:12.196 }, 00:11:12.196 { 00:11:12.196 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:12.196 "subtype": "NVMe", 00:11:12.196 
"listen_addresses": [ 00:11:12.196 { 00:11:12.196 "trtype": "TCP", 00:11:12.196 "adrfam": "IPv4", 00:11:12.196 "traddr": "10.0.0.2", 00:11:12.196 "trsvcid": "4420" 00:11:12.196 } 00:11:12.196 ], 00:11:12.196 "allow_any_host": true, 00:11:12.196 "hosts": [], 00:11:12.196 "serial_number": "SPDK00000000000003", 00:11:12.196 "model_number": "SPDK bdev Controller", 00:11:12.196 "max_namespaces": 32, 00:11:12.196 "min_cntlid": 1, 00:11:12.196 "max_cntlid": 65519, 00:11:12.196 "namespaces": [ 00:11:12.196 { 00:11:12.196 "nsid": 1, 00:11:12.196 "bdev_name": "Null3", 00:11:12.196 "name": "Null3", 00:11:12.196 "nguid": "DF98F353F88D4048B9467D1CD939ABDF", 00:11:12.196 "uuid": "df98f353-f88d-4048-b946-7d1cd939abdf" 00:11:12.196 } 00:11:12.196 ] 00:11:12.196 }, 00:11:12.196 { 00:11:12.196 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:12.196 "subtype": "NVMe", 00:11:12.196 "listen_addresses": [ 00:11:12.196 { 00:11:12.196 "trtype": "TCP", 00:11:12.196 "adrfam": "IPv4", 00:11:12.196 "traddr": "10.0.0.2", 00:11:12.196 "trsvcid": "4420" 00:11:12.196 } 00:11:12.196 ], 00:11:12.196 "allow_any_host": true, 00:11:12.196 "hosts": [], 00:11:12.196 "serial_number": "SPDK00000000000004", 00:11:12.196 "model_number": "SPDK bdev Controller", 00:11:12.196 "max_namespaces": 32, 00:11:12.196 "min_cntlid": 1, 00:11:12.196 "max_cntlid": 65519, 00:11:12.196 "namespaces": [ 00:11:12.196 { 00:11:12.196 "nsid": 1, 00:11:12.196 "bdev_name": "Null4", 00:11:12.196 "name": "Null4", 00:11:12.196 "nguid": "A7D58CF69F9448D3954C4A9BB5911967", 00:11:12.196 "uuid": "a7d58cf6-9f94-48d3-954c-4a9bb5911967" 00:11:12.196 } 00:11:12.196 ] 00:11:12.196 } 00:11:12.196 ] 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.196 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:12.197 rmmod nvme_tcp 00:11:12.197 rmmod nvme_fabrics 00:11:12.197 rmmod nvme_keyring 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:12.197 
08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 4013177 ']' 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 4013177 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 4013177 ']' 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 4013177 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4013177 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4013177' 00:11:12.197 killing process with pid 4013177 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 4013177 00:11:12.197 08:07:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 4013177 00:11:12.455 08:07:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:12.455 08:07:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:12.455 08:07:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:12.455 08:07:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.455 08:07:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:12.455 08:07:22 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.455 08:07:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.455 08:07:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.989 08:07:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:14.989 00:11:14.989 real 0m5.488s 00:11:14.989 user 0m4.727s 00:11:14.989 sys 0m1.847s 00:11:14.989 08:07:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.989 08:07:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.989 ************************************ 00:11:14.989 END TEST nvmf_target_discovery 00:11:14.989 ************************************ 00:11:14.989 08:07:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:14.989 08:07:24 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:14.989 08:07:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:14.989 08:07:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.989 08:07:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:14.989 ************************************ 00:11:14.989 START TEST nvmf_referrals 00:11:14.989 ************************************ 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:14.989 * Looking for test storage... 
00:11:14.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:14.989 
08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:14.989 08:07:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:14.990 08:07:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:16.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:16.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:16.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.888 08:07:26 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:16.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:16.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:16.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:11:16.888 00:11:16.888 --- 10.0.0.2 ping statistics --- 00:11:16.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.888 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:11:16.888 00:11:16.888 --- 10.0.0.1 ping statistics --- 00:11:16.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.888 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:16.888 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.889 08:07:26 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=4015265 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 4015265 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 4015265 ']' 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.889 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.889 [2024-07-21 08:07:26.506287] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:11:16.889 [2024-07-21 08:07:26.506385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.145 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.146 [2024-07-21 08:07:26.574944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.146 [2024-07-21 08:07:26.664176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.146 [2024-07-21 08:07:26.664229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:17.146 [2024-07-21 08:07:26.664257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.146 [2024-07-21 08:07:26.664268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.146 [2024-07-21 08:07:26.664278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.146 [2024-07-21 08:07:26.664367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.146 [2024-07-21 08:07:26.664434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.146 [2024-07-21 08:07:26.664500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.146 [2024-07-21 08:07:26.664502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.402 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.402 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:11:17.402 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:17.402 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:17.402 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.403 [2024-07-21 08:07:26.826561] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.403 08:07:26 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.403 [2024-07-21 08:07:26.838841] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:17.403 08:07:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:17.660 08:07:27 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:17.660 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:11:17.917 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:18.174 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
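The `get_referral_ips nvme` helper traced above pipes `nvme discover -o json` through `jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'` and `sort` before comparing against the expected addresses. A minimal Python sketch of that same filter is below; the sample records are illustrative, not taken from this run.

```python
import json


def referral_traddrs(discover_json: str) -> list[str]:
    """Mirror the jq filter used by get_referral_ips: keep the traddr of
    every record that is not the current discovery subsystem, sorted."""
    records = json.loads(discover_json)["records"]
    return sorted(r["traddr"] for r in records
                  if r["subtype"] != "current discovery subsystem")


# Illustrative sample shaped like `nvme discover -o json` output.
sample = json.dumps({"records": [
    {"subtype": "current discovery subsystem", "traddr": "10.0.0.2"},
    {"subtype": "discovery subsystem referral", "traddr": "127.0.0.2"},
    {"subtype": "nvme subsystem", "traddr": "127.0.0.2"},
]})
print(referral_traddrs(sample))  # ['127.0.0.2', '127.0.0.2']
```

This is why the trace compares against `127.0.0.2 127.0.0.2` after adding two referrals to the same address: one referral per registered entry, with the target's own discovery record filtered out.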
00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:18.432 08:07:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:18.432 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:18.432 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:18.432 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:18.432 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:18.432 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:18.432 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:18.432 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:18.689 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:18.947 08:07:28 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:18.947 rmmod nvme_tcp 00:11:18.947 rmmod nvme_fabrics 00:11:18.947 rmmod nvme_keyring 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 4015265 ']' 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 4015265 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 4015265 ']' 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 4015265 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:11:18.947 08:07:28 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:18.948 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4015265 00:11:18.948 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:18.948 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:18.948 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4015265' 00:11:18.948 killing process with pid 4015265 00:11:18.948 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 4015265 00:11:18.948 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 4015265 00:11:19.205 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:19.205 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:19.205 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:19.205 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.205 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.205 08:07:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.205 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.205 08:07:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.729 08:07:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:21.729 00:11:21.729 real 0m6.619s 00:11:21.729 user 0m9.530s 00:11:21.729 sys 0m2.206s 00:11:21.729 08:07:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:21.729 08:07:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.729 ************************************ 
00:11:21.729 END TEST nvmf_referrals 00:11:21.729 ************************************ 00:11:21.729 08:07:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:21.729 08:07:30 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:21.729 08:07:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:21.729 08:07:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.729 08:07:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:21.729 ************************************ 00:11:21.729 START TEST nvmf_connect_disconnect 00:11:21.729 ************************************ 00:11:21.729 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:21.729 * Looking for test storage... 00:11:21.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.729 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.729 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:21.729 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.729 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.729 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.729 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.730 08:07:30 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.730 08:07:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
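The `paths/export.sh` steps traced earlier prepend the same toolchain directories (`/opt/go/...`, `/opt/protoc/...`, `/opt/golangci/...`) on every source, so `PATH` accumulates many duplicate entries by the time it is exported. The duplicates are harmless but noisy; a small order-preserving dedup sketch (not part of the test scripts themselves) shows how such a string could be collapsed:

```python
def dedup_path(path: str) -> str:
    """Collapse repeated entries in a PATH-style string, keeping the first
    occurrence of each directory so lookup precedence is preserved."""
    seen: set[str] = set()
    parts: list[str] = []
    for p in path.split(":"):
        if p and p not in seen:
            seen.add(p)
            parts.append(p)
    return ":".join(parts)


print(dedup_path("/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin:/usr/bin"))
# /opt/go/bin:/usr/bin:/usr/local/bin
```

Keeping the first occurrence (rather than the last) matters here: the scripts prepend, so the earliest entry is the one intended to win.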
00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:23.628 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:23.628 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.628 08:07:32 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.628 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:23.629 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:23.629 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.629 08:07:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:23.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:23.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:11:23.629 00:11:23.629 --- 10.0.0.2 ping statistics --- 00:11:23.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.629 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:11:23.629 00:11:23.629 --- 10.0.0.1 ping statistics --- 00:11:23.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.629 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=4017554 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 4017554 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 4017554 ']' 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:23.629 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:23.629 [2024-07-21 08:07:33.174737] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:11:23.629 [2024-07-21 08:07:33.174820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.629 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.629 [2024-07-21 08:07:33.240907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.886 [2024-07-21 08:07:33.335324] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.886 [2024-07-21 08:07:33.335387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.886 [2024-07-21 08:07:33.335404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.886 [2024-07-21 08:07:33.335417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.886 [2024-07-21 08:07:33.335429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:23.886 [2024-07-21 08:07:33.335519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.886 [2024-07-21 08:07:33.335550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.886 [2024-07-21 08:07:33.335622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.886 [2024-07-21 08:07:33.335626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:23.886 [2024-07-21 08:07:33.496627] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.886 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.144 [2024-07-21 08:07:33.548195] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:24.144 08:07:33 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:24.144 08:07:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:26.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:19.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.643 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.797 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 
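The repeated "disconnected 1 controller(s)" messages above are the output of the test's connect/disconnect loop: `connect_disconnect.sh` sets `num_iterations=100` and `NVME_CONNECT='nvme connect -i 8'`, then repeatedly attaches to and detaches from `nqn.2016-06.io.spdk:cnode1` over TCP. A minimal dry-run sketch of that loop follows; the loop shape and iteration count here are illustrative assumptions (the real test runs 100 iterations as root against live hardware), and `run` only echoes the commands instead of executing them:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the connect/disconnect loop whose output appears above.
# Assumption: the loop body is reconstructed from the log (NVME_CONNECT and
# num_iterations), not copied from the SPDK test script itself.
set -u

NQN="nqn.2016-06.io.spdk:cnode1"
ADDR=10.0.0.2
PORT=4420
ITERATIONS=3            # the log above shows num_iterations=100

run() { echo "$@"; }    # dry-run: print each command; swap for "$@" to execute

out=$(
    for ((i = 1; i <= ITERATIONS; i++)); do
        run nvme connect -i 8 -t tcp -n "$NQN" -a "$ADDR" -s "$PORT"
        run nvme disconnect -n "$NQN"
    done
)
echo "$out"
```

Each iteration emits one connect and one disconnect command, matching the one-disconnect-per-timestamp cadence in the log.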
00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.672 rmmod nvme_tcp 00:15:15.672 rmmod nvme_fabrics 00:15:15.672 rmmod nvme_keyring 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 4017554 ']' 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 4017554 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 4017554 ']' 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 4017554 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4017554 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4017554' 
00:15:15.672 killing process with pid 4017554 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 4017554 00:15:15.672 08:11:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 4017554 00:15:15.672 08:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:15.672 08:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:15.672 08:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:15.672 08:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.672 08:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.672 08:11:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.672 08:11:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.672 08:11:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.574 08:11:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:17.574 00:15:17.574 real 3m56.271s 00:15:17.574 user 14m58.571s 00:15:17.574 sys 0m35.861s 00:15:17.575 08:11:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.575 08:11:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:17.575 ************************************ 00:15:17.575 END TEST nvmf_connect_disconnect 00:15:17.575 ************************************ 00:15:17.575 08:11:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:17.575 08:11:27 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:17.575 08:11:27 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:17.575 08:11:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.575 08:11:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.575 ************************************ 00:15:17.575 START TEST nvmf_multitarget 00:15:17.575 ************************************ 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:17.575 * Looking for test storage... 00:15:17.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.575 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.833 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:17.834 08:11:27 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:17.834 08:11:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:19.746 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:19.746 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.746 08:11:29 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:19.746 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:19.746 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:19.746 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:19.747 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:15:19.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:15:19.747 00:15:19.747 --- 10.0.0.2 ping statistics --- 00:15:19.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.747 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:15:19.747 00:15:19.747 --- 10.0.0.1 ping statistics --- 00:15:19.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.747 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@10 -- # set +x 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=4048535 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 4048535 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 4048535 ']' 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.747 08:11:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:19.747 [2024-07-21 08:11:29.335185] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:15:19.747 [2024-07-21 08:11:29.335252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.747 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.005 [2024-07-21 08:11:29.405450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.005 [2024-07-21 08:11:29.504885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:20.005 [2024-07-21 08:11:29.504954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.005 [2024-07-21 08:11:29.504971] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.005 [2024-07-21 08:11:29.504984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.005 [2024-07-21 08:11:29.504996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.005 [2024-07-21 08:11:29.508639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.005 [2024-07-21 08:11:29.508682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.005 [2024-07-21 08:11:29.508733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.005 [2024-07-21 08:11:29.508736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.005 08:11:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.263 08:11:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:15:20.263 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:20.263 08:11:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:20.263 08:11:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:20.263 08:11:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.263 08:11:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:20.263 08:11:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:20.263 08:11:29 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:20.263 08:11:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:20.263 08:11:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:20.263 "nvmf_tgt_1" 00:15:20.263 08:11:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:20.521 "nvmf_tgt_2" 00:15:20.521 08:11:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:20.521 08:11:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:20.521 08:11:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:20.521 08:11:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:20.777 true 00:15:20.777 08:11:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:20.777 true 00:15:20.777 08:11:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:20.777 08:11:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:21.036 08:11:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:21.036 08:11:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:21.036 08:11:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- 
# nvmftestfini 00:15:21.036 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:21.036 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:21.036 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:21.036 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:21.036 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:21.036 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:21.036 rmmod nvme_tcp 00:15:21.036 rmmod nvme_fabrics 00:15:21.036 rmmod nvme_keyring 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 4048535 ']' 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 4048535 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 4048535 ']' 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 4048535 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4048535 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4048535' 00:15:21.037 killing process 
with pid 4048535 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 4048535 00:15:21.037 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 4048535 00:15:21.295 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.295 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:21.295 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:21.295 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.295 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:21.295 08:11:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.295 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.295 08:11:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.201 08:11:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:23.201 00:15:23.201 real 0m5.677s 00:15:23.201 user 0m6.526s 00:15:23.201 sys 0m1.834s 00:15:23.201 08:11:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:23.201 08:11:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:23.201 ************************************ 00:15:23.201 END TEST nvmf_multitarget 00:15:23.201 ************************************ 00:15:23.458 08:11:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:23.458 08:11:32 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:23.458 08:11:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:23.458 08:11:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:23.458 
08:11:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:23.458 ************************************ 00:15:23.458 START TEST nvmf_rpc 00:15:23.458 ************************************ 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:23.458 * Looking for test storage... 00:15:23.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:23.458 08:11:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.361 08:11:34 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:25.361 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:25.361 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:25.361 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.361 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:25.362 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:25.362 08:11:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.619 08:11:34 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:25.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:15:25.619 00:15:25.619 --- 10.0.0.2 ping statistics --- 00:15:25.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.619 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:15:25.619 00:15:25.619 --- 10.0.0.1 ping statistics --- 00:15:25.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.619 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:25.619 
08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=4050633 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 4050633 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 4050633 ']' 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.619 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.619 [2024-07-21 08:11:35.099400] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:15:25.619 [2024-07-21 08:11:35.099507] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.619 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.619 [2024-07-21 08:11:35.168328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.878 [2024-07-21 08:11:35.262323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.878 [2024-07-21 08:11:35.262386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.878 [2024-07-21 08:11:35.262413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.878 [2024-07-21 08:11:35.262427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.878 [2024-07-21 08:11:35.262439] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
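The `-m 0xF` core mask passed to `nvmf_tgt` is what produces the "Total cores available: 4" notice and the four reactors on cores 0-3: each set bit in the mask pins one reactor to that core. A quick way to check a mask with plain shell arithmetic (the popcount loop is illustrative, not part of the test harness):

```shell
# Count the set bits in an SPDK core mask: each bit = one reactor core.
mask=$((0xF))   # core mask from the nvmf_tgt command line above
cores=0
while [ "$mask" -gt 0 ]; do
  cores=$(( cores + (mask & 1) ))   # add the lowest bit
  mask=$(( mask >> 1 ))             # shift to the next bit
done
echo "cores: $cores"   # 0xF -> 4 reactors (cores 0-3)
```

A mask like `0x5` would instead start two reactors, on cores 0 and 2, so the bit positions matter as well as the count.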
00:15:25.878 [2024-07-21 08:11:35.262524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.878 [2024-07-21 08:11:35.262581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.878 [2024-07-21 08:11:35.262648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.878 [2024-07-21 08:11:35.262652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:25.878 "tick_rate": 2700000000, 00:15:25.878 "poll_groups": [ 00:15:25.878 { 00:15:25.878 "name": "nvmf_tgt_poll_group_000", 00:15:25.878 "admin_qpairs": 0, 00:15:25.878 "io_qpairs": 0, 00:15:25.878 "current_admin_qpairs": 0, 00:15:25.878 "current_io_qpairs": 0, 00:15:25.878 "pending_bdev_io": 0, 00:15:25.878 "completed_nvme_io": 0, 00:15:25.878 "transports": [] 00:15:25.878 }, 00:15:25.878 { 00:15:25.878 "name": "nvmf_tgt_poll_group_001", 00:15:25.878 "admin_qpairs": 0, 00:15:25.878 "io_qpairs": 0, 00:15:25.878 "current_admin_qpairs": 
0, 00:15:25.878 "current_io_qpairs": 0, 00:15:25.878 "pending_bdev_io": 0, 00:15:25.878 "completed_nvme_io": 0, 00:15:25.878 "transports": [] 00:15:25.878 }, 00:15:25.878 { 00:15:25.878 "name": "nvmf_tgt_poll_group_002", 00:15:25.878 "admin_qpairs": 0, 00:15:25.878 "io_qpairs": 0, 00:15:25.878 "current_admin_qpairs": 0, 00:15:25.878 "current_io_qpairs": 0, 00:15:25.878 "pending_bdev_io": 0, 00:15:25.878 "completed_nvme_io": 0, 00:15:25.878 "transports": [] 00:15:25.878 }, 00:15:25.878 { 00:15:25.878 "name": "nvmf_tgt_poll_group_003", 00:15:25.878 "admin_qpairs": 0, 00:15:25.878 "io_qpairs": 0, 00:15:25.878 "current_admin_qpairs": 0, 00:15:25.878 "current_io_qpairs": 0, 00:15:25.878 "pending_bdev_io": 0, 00:15:25.878 "completed_nvme_io": 0, 00:15:25.878 "transports": [] 00:15:25.878 } 00:15:25.878 ] 00:15:25.878 }' 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.878 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.879 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.879 [2024-07-21 08:11:35.492684] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.879 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.879 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:15:25.879 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.879 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:26.137 "tick_rate": 2700000000, 00:15:26.137 "poll_groups": [ 00:15:26.137 { 00:15:26.137 "name": "nvmf_tgt_poll_group_000", 00:15:26.137 "admin_qpairs": 0, 00:15:26.137 "io_qpairs": 0, 00:15:26.137 "current_admin_qpairs": 0, 00:15:26.137 "current_io_qpairs": 0, 00:15:26.137 "pending_bdev_io": 0, 00:15:26.137 "completed_nvme_io": 0, 00:15:26.137 "transports": [ 00:15:26.137 { 00:15:26.137 "trtype": "TCP" 00:15:26.137 } 00:15:26.137 ] 00:15:26.137 }, 00:15:26.137 { 00:15:26.137 "name": "nvmf_tgt_poll_group_001", 00:15:26.137 "admin_qpairs": 0, 00:15:26.137 "io_qpairs": 0, 00:15:26.137 "current_admin_qpairs": 0, 00:15:26.137 "current_io_qpairs": 0, 00:15:26.137 "pending_bdev_io": 0, 00:15:26.137 "completed_nvme_io": 0, 00:15:26.137 "transports": [ 00:15:26.137 { 00:15:26.137 "trtype": "TCP" 00:15:26.137 } 00:15:26.137 ] 00:15:26.137 }, 00:15:26.137 { 00:15:26.137 "name": "nvmf_tgt_poll_group_002", 00:15:26.137 "admin_qpairs": 0, 00:15:26.137 "io_qpairs": 0, 00:15:26.137 "current_admin_qpairs": 0, 00:15:26.137 "current_io_qpairs": 0, 00:15:26.137 "pending_bdev_io": 0, 00:15:26.137 "completed_nvme_io": 0, 00:15:26.137 "transports": [ 00:15:26.137 { 00:15:26.137 "trtype": "TCP" 00:15:26.137 } 00:15:26.137 ] 00:15:26.137 }, 00:15:26.137 { 00:15:26.137 "name": "nvmf_tgt_poll_group_003", 00:15:26.137 "admin_qpairs": 0, 00:15:26.137 "io_qpairs": 0, 00:15:26.137 "current_admin_qpairs": 0, 00:15:26.137 "current_io_qpairs": 0, 00:15:26.137 "pending_bdev_io": 0, 00:15:26.137 "completed_nvme_io": 0, 00:15:26.137 "transports": [ 00:15:26.137 { 00:15:26.137 "trtype": "TCP" 00:15:26.137 } 00:15:26.137 ] 00:15:26.137 } 
00:15:26.137 ] 00:15:26.137 }' 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.137 Malloc1 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.137 [2024-07-21 08:11:35.631835] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:26.137 [2024-07-21 08:11:35.654303] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:26.137 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:26.137 could not add new controller: failed to write to nvme-fabrics device 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.137 08:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:26.706 08:11:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:26.706 08:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:26.706 08:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.706 08:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:26.706 08:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.268 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:29.269 08:11:38 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:29.269 [2024-07-21 08:11:38.424365] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:29.269 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:29.269 could not add new controller: failed to write to nvme-fabrics device 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.269 08:11:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:29.529 08:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:29.529 08:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:29.529 08:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:29.529 08:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:29.529 08:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.061 [2024-07-21 08:11:41.298469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
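The `waitforserial` and `waitforserial_disconnect` helpers traced above are retry loops: they poll `lsblk -l -o NAME,SERIAL` for the subsystem serial (`SPDKISFASTANDAWESOME`) up to 16 times, sleeping between attempts, until the device appears or disappears. The same pattern generalized, mirroring the `(( i++ <= 15 ))` loop from the log; the `wait_for` name and the file-based demo are mine, for illustration only:

```shell
# Retry helper in the style of waitforserial: run a check command until it
# succeeds or ~16 attempts elapse, then report failure.
wait_for() {
  i=0
  while [ "$i" -le 15 ]; do
    "$@" && return 0   # check passed; stop polling
    i=$(( i + 1 ))
    sleep 0.2
  done
  return 1             # gave up after 16 attempts
}

# Demo: wait for a file that a background job creates shortly, the way the
# harness waits for the fabric device node to show up after nvme connect.
marker=$(mktemp -u)
( sleep 0.5; : > "$marker" ) &
wait_for test -e "$marker" && echo "device ready"
rm -f "$marker"
```

Polling with a bounded retry count, rather than a fixed `sleep`, is what lets the harness tolerate the variable delay between `nvme connect` returning and the block device actually appearing.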
00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.061 08:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:32.627 08:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:32.627 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:32.627 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:32.627 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:32.627 08:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:34.530 08:11:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:34.530 08:11:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:34.530 08:11:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:34.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 [2024-07-21 08:11:44.109859] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.530 08:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:35.467 08:11:44 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:35.467 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:35.467 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:35.467 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:35.467 08:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:37.366 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:37.366 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:37.366 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:37.366 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:37.366 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:37.366 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:37.366 08:11:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.367 08:11:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:37.367 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:37.367 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:37.367 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.367 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:37.367 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.367 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:37.367 08:11:46 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:37.367 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.367 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.625 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.625 08:11:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.625 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.625 08:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.625 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.625 08:11:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.626 [2024-07-21 08:11:47.019231] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:37.626 08:11:47 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.626 08:11:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:38.192 08:11:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.192 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:38.192 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.192 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:38.192 08:11:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:40.092 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:40.092 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:40.092 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.352 [2024-07-21 08:11:49.825067] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.352 08:11:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:40.918 08:11:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:40.918 08:11:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:15:40.918 08:11:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:40.918 08:11:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:40.918 08:11:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:42.820 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:42.820 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:42.820 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:43.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.079 [2024-07-21 08:11:52.551343] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.079 08:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:43.646 08:11:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:43.646 08:11:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:43.646 08:11:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.646 08:11:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:43.646 08:11:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:45.545 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:45.545 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:45.545 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:45.545 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:45.545 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:45.545 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:45.545 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:45.802 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.802 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 [2024-07-21 08:11:55.314125] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 [2024-07-21 08:11:55.362150] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 [2024-07-21 08:11:55.410289] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.803 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.060 08:11:55 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 [2024-07-21 08:11:55.458457] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 [2024-07-21 08:11:55.506650] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.060 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:46.061 "tick_rate": 2700000000, 00:15:46.061 "poll_groups": [ 00:15:46.061 { 00:15:46.061 "name": "nvmf_tgt_poll_group_000", 00:15:46.061 "admin_qpairs": 2, 00:15:46.061 "io_qpairs": 84, 00:15:46.061 "current_admin_qpairs": 0, 00:15:46.061 "current_io_qpairs": 0, 00:15:46.061 "pending_bdev_io": 0, 00:15:46.061 "completed_nvme_io": 151, 00:15:46.061 "transports": [ 00:15:46.061 { 00:15:46.061 "trtype": "TCP" 00:15:46.061 } 00:15:46.061 ] 00:15:46.061 }, 00:15:46.061 { 00:15:46.061 "name": "nvmf_tgt_poll_group_001", 00:15:46.061 "admin_qpairs": 2, 00:15:46.061 "io_qpairs": 84, 
00:15:46.061 "current_admin_qpairs": 0, 00:15:46.061 "current_io_qpairs": 0, 00:15:46.061 "pending_bdev_io": 0, 00:15:46.061 "completed_nvme_io": 252, 00:15:46.061 "transports": [ 00:15:46.061 { 00:15:46.061 "trtype": "TCP" 00:15:46.061 } 00:15:46.061 ] 00:15:46.061 }, 00:15:46.061 { 00:15:46.061 "name": "nvmf_tgt_poll_group_002", 00:15:46.061 "admin_qpairs": 1, 00:15:46.061 "io_qpairs": 84, 00:15:46.061 "current_admin_qpairs": 0, 00:15:46.061 "current_io_qpairs": 0, 00:15:46.061 "pending_bdev_io": 0, 00:15:46.061 "completed_nvme_io": 164, 00:15:46.061 "transports": [ 00:15:46.061 { 00:15:46.061 "trtype": "TCP" 00:15:46.061 } 00:15:46.061 ] 00:15:46.061 }, 00:15:46.061 { 00:15:46.061 "name": "nvmf_tgt_poll_group_003", 00:15:46.061 "admin_qpairs": 2, 00:15:46.061 "io_qpairs": 84, 00:15:46.061 "current_admin_qpairs": 0, 00:15:46.061 "current_io_qpairs": 0, 00:15:46.061 "pending_bdev_io": 0, 00:15:46.061 "completed_nvme_io": 119, 00:15:46.061 "transports": [ 00:15:46.061 { 00:15:46.061 "trtype": "TCP" 00:15:46.061 } 00:15:46.061 ] 00:15:46.061 } 00:15:46.061 ] 00:15:46.061 }' 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.061 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.061 rmmod nvme_tcp 00:15:46.061 rmmod nvme_fabrics 00:15:46.061 rmmod nvme_keyring 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 4050633 ']' 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 4050633 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 4050633 ']' 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 4050633 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4050633 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:46.319 
08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4050633' 00:15:46.319 killing process with pid 4050633 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 4050633 00:15:46.319 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 4050633 00:15:46.578 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.578 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.578 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.578 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.578 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.578 08:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.578 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.578 08:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.520 08:11:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:48.520 00:15:48.520 real 0m25.148s 00:15:48.520 user 1m21.945s 00:15:48.520 sys 0m4.056s 00:15:48.520 08:11:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:48.520 08:11:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.520 ************************************ 00:15:48.520 END TEST nvmf_rpc 00:15:48.520 ************************************ 00:15:48.520 08:11:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:48.520 08:11:58 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:48.520 08:11:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:48.520 08:11:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:15:48.520 08:11:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:48.520 ************************************ 00:15:48.520 START TEST nvmf_invalid 00:15:48.520 ************************************ 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:48.520 * Looking for test storage... 00:15:48.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.520 08:11:58 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:48.520 08:11:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.521 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:48.521 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:48.521 08:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:48.521 08:11:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.052 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:51.053 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.053 08:12:00 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:51.053 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:51.053 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:51.053 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.053 08:12:00 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:51.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:51.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:15:51.053 00:15:51.053 --- 10.0.0.2 ping statistics --- 00:15:51.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.053 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:15:51.053 00:15:51.053 --- 10.0.0.1 ping statistics --- 00:15:51.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.053 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=4055154 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 4055154 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 4055154 ']' 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:51.053 [2024-07-21 08:12:00.348715] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:15:51.053 [2024-07-21 08:12:00.348796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.053 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.053 [2024-07-21 08:12:00.414517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.053 [2024-07-21 08:12:00.512718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.053 [2024-07-21 08:12:00.512769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:51.053 [2024-07-21 08:12:00.512784] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.053 [2024-07-21 08:12:00.512798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.053 [2024-07-21 08:12:00.512808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.053 [2024-07-21 08:12:00.512854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.053 [2024-07-21 08:12:00.512899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.053 [2024-07-21 08:12:00.512932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.053 [2024-07-21 08:12:00.512935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.053 08:12:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:51.054 08:12:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15607 00:15:51.310 [2024-07-21 08:12:00.894876] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:51.310 08:12:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:15:51.310 { 00:15:51.310 "nqn": "nqn.2016-06.io.spdk:cnode15607", 00:15:51.310 "tgt_name": "foobar", 00:15:51.310 "method": "nvmf_create_subsystem", 00:15:51.310 "req_id": 1 00:15:51.310 } 00:15:51.310 Got JSON-RPC error response 00:15:51.310 response: 00:15:51.311 { 00:15:51.311 "code": -32603, 00:15:51.311 "message": "Unable to find target foobar" 00:15:51.311 }' 00:15:51.311 08:12:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:51.311 { 00:15:51.311 "nqn": "nqn.2016-06.io.spdk:cnode15607", 00:15:51.311 "tgt_name": "foobar", 00:15:51.311 "method": "nvmf_create_subsystem", 00:15:51.311 "req_id": 1 00:15:51.311 } 00:15:51.311 Got JSON-RPC error response 00:15:51.311 response: 00:15:51.311 { 00:15:51.311 "code": -32603, 00:15:51.311 "message": "Unable to find target foobar" 00:15:51.311 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:51.311 08:12:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:51.311 08:12:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22714 00:15:51.568 [2024-07-21 08:12:01.159812] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22714: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:51.568 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:51.568 { 00:15:51.568 "nqn": "nqn.2016-06.io.spdk:cnode22714", 00:15:51.568 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:51.568 "method": "nvmf_create_subsystem", 00:15:51.568 "req_id": 1 00:15:51.568 } 00:15:51.568 Got JSON-RPC error response 00:15:51.568 response: 00:15:51.568 { 00:15:51.568 "code": -32602, 00:15:51.568 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:51.568 }' 00:15:51.568 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:51.568 { 00:15:51.568 "nqn": 
"nqn.2016-06.io.spdk:cnode22714", 00:15:51.568 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:51.568 "method": "nvmf_create_subsystem", 00:15:51.568 "req_id": 1 00:15:51.568 } 00:15:51.568 Got JSON-RPC error response 00:15:51.568 response: 00:15:51.568 { 00:15:51.568 "code": -32602, 00:15:51.568 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:51.568 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:51.568 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:51.568 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13173 00:15:51.824 [2024-07-21 08:12:01.424691] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13173: invalid model number 'SPDK_Controller' 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:51.824 { 00:15:51.824 "nqn": "nqn.2016-06.io.spdk:cnode13173", 00:15:51.824 "model_number": "SPDK_Controller\u001f", 00:15:51.824 "method": "nvmf_create_subsystem", 00:15:51.824 "req_id": 1 00:15:51.824 } 00:15:51.824 Got JSON-RPC error response 00:15:51.824 response: 00:15:51.824 { 00:15:51.824 "code": -32602, 00:15:51.824 "message": "Invalid MN SPDK_Controller\u001f" 00:15:51.824 }' 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:51.824 { 00:15:51.824 "nqn": "nqn.2016-06.io.spdk:cnode13173", 00:15:51.824 "model_number": "SPDK_Controller\u001f", 00:15:51.824 "method": "nvmf_create_subsystem", 00:15:51.824 "req_id": 1 00:15:51.824 } 00:15:51.824 Got JSON-RPC error response 00:15:51.824 response: 00:15:51.824 { 00:15:51.824 "code": -32602, 00:15:51.824 "message": "Invalid MN SPDK_Controller\u001f" 00:15:51.824 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@19 -- # local length=21 ll 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:51.824 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x70' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=i 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 65 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'hhpoL{p#&}9iT{H&AC\6!' 00:15:52.082 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'hhpoL{p#&}9iT{H&AC\6!' nqn.2016-06.io.spdk:cnode26505 00:15:52.341 [2024-07-21 08:12:01.753824] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26505: invalid serial number 'hhpoL{p#&}9iT{H&AC\6!' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:52.341 { 00:15:52.341 "nqn": "nqn.2016-06.io.spdk:cnode26505", 00:15:52.341 "serial_number": "hhpoL{p#&}9iT{H&AC\\6!", 00:15:52.341 "method": "nvmf_create_subsystem", 00:15:52.341 "req_id": 1 00:15:52.341 } 00:15:52.341 Got JSON-RPC error response 00:15:52.341 response: 00:15:52.341 { 00:15:52.341 "code": -32602, 00:15:52.341 "message": "Invalid SN hhpoL{p#&}9iT{H&AC\\6!" 00:15:52.341 }' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:52.341 { 00:15:52.341 "nqn": "nqn.2016-06.io.spdk:cnode26505", 00:15:52.341 "serial_number": "hhpoL{p#&}9iT{H&AC\\6!", 00:15:52.341 "method": "nvmf_create_subsystem", 00:15:52.341 "req_id": 1 00:15:52.341 } 00:15:52.341 Got JSON-RPC error response 00:15:52.341 response: 00:15:52.341 { 00:15:52.341 "code": -32602, 00:15:52.341 "message": "Invalid SN hhpoL{p#&}9iT{H&AC\\6!" 
00:15:52.341 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=l 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 80 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:52.341 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x32' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=o 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 98 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.342 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]] 00:15:52.343 08:12:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'r5u( /dev/null' 00:15:55.175 08:12:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.079 08:12:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:57.079 00:15:57.079 real 0m8.635s 00:15:57.079 user 0m20.151s 00:15:57.079 sys 0m2.439s 00:15:57.079 08:12:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.079 08:12:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:57.079 ************************************ 00:15:57.079 END TEST nvmf_invalid 00:15:57.079 ************************************ 00:15:57.335 08:12:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:57.335 08:12:06 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:57.335 08:12:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:57.335 08:12:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.335 08:12:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:57.335 ************************************ 00:15:57.335 START TEST nvmf_abort 00:15:57.335 ************************************ 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:57.335 * Looking for test storage... 00:15:57.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.335 08:12:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:57.336 
08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:15:57.336 08:12:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # 
set +x 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.233 08:12:08 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:59.233 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:15:59.233 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:59.233 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:59.233 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.233 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:59.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:15:59.491 00:15:59.491 --- 10.0.0.2 ping statistics --- 00:15:59.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.491 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:59.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:15:59.491 00:15:59.491 --- 10.0.0.1 ping statistics --- 00:15:59.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.491 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=4058331 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 4058331 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 4058331 ']' 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:59.491 08:12:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:59.491 [2024-07-21 08:12:08.958371] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:15:59.491 [2024-07-21 08:12:08.958460] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.491 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.491 [2024-07-21 08:12:09.026410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:59.491 [2024-07-21 08:12:09.118596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.491 [2024-07-21 08:12:09.118679] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.491 [2024-07-21 08:12:09.118696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.491 [2024-07-21 08:12:09.118709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.491 [2024-07-21 08:12:09.118720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:59.491 [2024-07-21 08:12:09.118792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.491 [2024-07-21 08:12:09.118910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.491 [2024-07-21 08:12:09.118912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:59.749 [2024-07-21 08:12:09.250157] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:59.749 Malloc0 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:59.749 Delay0 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:59.749 [2024-07-21 08:12:09.327558] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.749 08:12:09 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:59.749 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.007 [2024-07-21 08:12:09.474726] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:02.538 Initializing NVMe Controllers 00:16:02.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:02.538 controller IO queue size 128 less than required 00:16:02.538 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:16:02.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:16:02.538 Initialization complete. Launching workers. 
00:16:02.538 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34031 00:16:02.538 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34092, failed to submit 62 00:16:02.538 success 34035, unsuccess 57, failed 0 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.538 rmmod nvme_tcp 00:16:02.538 rmmod nvme_fabrics 00:16:02.538 rmmod nvme_keyring 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 4058331 ']' 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 4058331 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 4058331 ']' 00:16:02.538 08:12:11 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 4058331 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4058331 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:02.538 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:02.539 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4058331' 00:16:02.539 killing process with pid 4058331 00:16:02.539 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 4058331 00:16:02.539 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 4058331 00:16:02.539 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.539 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.539 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.539 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.539 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.539 08:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.539 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.539 08:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.443 08:12:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:04.443 00:16:04.443 real 0m7.256s 00:16:04.443 user 0m10.797s 00:16:04.443 sys 0m2.477s 00:16:04.443 08:12:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:16:04.443 08:12:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:04.443 ************************************ 00:16:04.443 END TEST nvmf_abort 00:16:04.443 ************************************ 00:16:04.443 08:12:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:04.443 08:12:14 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:04.443 08:12:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.443 08:12:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.443 08:12:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.443 ************************************ 00:16:04.443 START TEST nvmf_ns_hotplug_stress 00:16:04.443 ************************************ 00:16:04.443 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:04.702 * Looking for test storage... 
00:16:04.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.702 08:12:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.702 08:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:06.625 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.625 
08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:06.625 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:06.625 
Found net devices under 0000:0a:00.0: cvl_0_0 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:06.625 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.625 08:12:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:16:06.625 00:16:06.625 --- 10.0.0.2 ping statistics --- 00:16:06.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.625 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:06.625 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:06.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:16:06.883 00:16:06.883 --- 10.0.0.1 ping statistics --- 00:16:06.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.883 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=4060623 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 4060623 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 4060623 ']' 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.883 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.883 [2024-07-21 08:12:16.327082] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:16:06.883 [2024-07-21 08:12:16.327166] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.883 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.883 [2024-07-21 08:12:16.398666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:06.883 [2024-07-21 08:12:16.491000] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.883 [2024-07-21 08:12:16.491061] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.883 [2024-07-21 08:12:16.491086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.883 [2024-07-21 08:12:16.491100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.883 [2024-07-21 08:12:16.491111] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:06.883 [2024-07-21 08:12:16.491224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.883 [2024-07-21 08:12:16.491323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.883 [2024-07-21 08:12:16.491325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.140 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.140 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:16:07.140 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.140 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:07.140 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:07.140 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.140 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:16:07.140 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:07.396 [2024-07-21 08:12:16.851009] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.396 08:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:07.653 08:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.911 [2024-07-21 08:12:17.442880] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:16:07.911 08:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:08.167 08:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:08.424 Malloc0 00:16:08.424 08:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:08.681 Delay0 00:16:08.681 08:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:08.938 08:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:09.196 NULL1 00:16:09.196 08:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:09.453 08:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4061016 00:16:09.453 08:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:09.453 08:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:09.453 08:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.453 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.711 08:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:09.968 08:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:16:09.968 08:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:10.226 true 00:16:10.226 08:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:10.226 08:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.483 08:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:10.741 08:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:16:10.741 08:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:10.999 true 00:16:10.999 08:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:10.999 08:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.951 Read completed with error (sct=0, sc=11) 00:16:11.951 08:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:11.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.208 08:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:16:12.208 08:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:16:12.464 true 00:16:12.464 08:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:12.464 08:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.721 08:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:12.978 08:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:16:12.978 08:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:13.236 true 00:16:13.236 08:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:13.236 08:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.170 08:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:14.171 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:16:14.428 08:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:16:14.428 08:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:14.685 true 00:16:14.685 08:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:14.685 08:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.942 08:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:15.201 08:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:16:15.202 08:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:15.202 true 00:16:15.461 08:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:15.461 08:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.395 08:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:16.395 08:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:16:16.395 08:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1007 00:16:16.653 true 00:16:16.653 08:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:16.653 08:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.911 08:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:17.169 08:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:16:17.169 08:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:16:17.428 true 00:16:17.428 08:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:17.428 08:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.365 08:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:18.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.623 08:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:16:18.623 08:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:16:18.879 true 00:16:18.879 08:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:18.879 08:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:19.136 08:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:19.394 08:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:16:19.394 08:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:16:19.651 true 00:16:19.651 08:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:19.651 08:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:20.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.600 08:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:20.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.864 08:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:16:20.864 08:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:16:21.120 true 00:16:21.120 08:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:21.120 08:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:21.377 08:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:21.635 08:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:16:21.635 08:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:16:21.892 true 00:16:21.892 08:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:21.892 08:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.826 08:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:23.083 08:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:16:23.083 08:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:16:23.342 true 00:16:23.342 08:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:23.342 08:12:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.601 08:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:23.859 08:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:16:23.859 08:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:16:23.859 true 00:16:23.859 08:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:23.859 08:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.796 08:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:24.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.053 08:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:16:25.053 08:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:16:25.311 true 00:16:25.311 08:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
4061016 00:16:25.311 08:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.569 08:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:25.827 08:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:16:25.827 08:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:16:26.085 true 00:16:26.085 08:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:26.085 08:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.020 08:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:27.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.277 08:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:16:27.277 08:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:27.535 true 00:16:27.535 08:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:27.535 08:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.793 08:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:28.050 08:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:16:28.050 08:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:28.307 true 00:16:28.307 08:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:28.307 08:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:29.240 08:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:29.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.497 08:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:16:29.497 08:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:29.753 true 00:16:29.753 08:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:29.753 08:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.010 08:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:30.265 08:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:16:30.265 08:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:30.521 true 00:16:30.521 08:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:30.521 08:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.450 08:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:31.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.706 08:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:16:31.706 08:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:31.962 true 00:16:31.962 08:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:31.962 08:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.219 08:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:32.475 
08:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:16:32.475 08:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:32.732 true 00:16:32.732 08:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:32.732 08:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.659 08:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:33.916 08:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:16:33.916 08:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:33.916 true 00:16:33.916 08:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:33.916 08:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.172 08:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:34.428 08:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:16:34.428 08:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:34.683 true 
00:16:34.683 08:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:34.683 08:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.630 08:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.886 08:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:35.886 08:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:36.142 true 00:16:36.142 08:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:36.142 08:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.398 08:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:36.655 08:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:36.655 08:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1026 00:16:36.912 true 00:16:36.912 08:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:36.912 08:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.845 08:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:37.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:38.102 08:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:38.102 08:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:38.360 true 00:16:38.360 08:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:38.360 08:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.618 08:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:38.875 08:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:38.875 08:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:39.133 true 00:16:39.133 08:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:39.133 08:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.066 08:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:40.066 Initializing NVMe Controllers 00:16:40.066 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:40.066 Controller IO queue size 128, less than required. 00:16:40.066 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:40.066 Controller IO queue size 128, less than required. 00:16:40.066 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:40.066 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:40.066 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:40.066 Initialization complete. Launching workers. 
00:16:40.066 ======================================================== 00:16:40.066 Latency(us) 00:16:40.066 Device Information : IOPS MiB/s Average min max 00:16:40.066 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 878.38 0.43 76767.99 2973.37 1032840.05 00:16:40.066 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11100.41 5.42 11498.59 3786.74 451154.02 00:16:40.066 ======================================================== 00:16:40.066 Total : 11978.79 5.85 16284.68 2973.37 1032840.05 00:16:40.066 00:16:40.066 08:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:40.066 08:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:40.324 true 00:16:40.324 08:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4061016 00:16:40.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4061016) - No such process 00:16:40.324 08:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4061016 00:16:40.324 08:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.582 08:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:41.147 08:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:41.147 08:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:41.147 08:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:41.147 08:12:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:41.147 08:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:41.147 null0 00:16:41.147 08:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:41.147 08:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:41.147 08:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:41.403 null1 00:16:41.403 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:41.403 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:41.403 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:41.660 null2 00:16:41.917 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:41.917 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:41.917 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:41.917 null3 00:16:41.917 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:41.917 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:41.917 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:42.175 null4 
00:16:42.175 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:42.175 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:42.175 08:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:42.432 null5 00:16:42.432 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:42.432 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:42.432 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:42.689 null6 00:16:42.689 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:42.689 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:42.689 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:42.947 null7 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4065057 4065058 4065060 4065062 4065065 4065067 4065069 4065071 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:42.947 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:16:43.204 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:43.204 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:43.204 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:43.204 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.204 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:43.204 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:43.204 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:43.204 08:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.461 08:12:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.461 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:43.718 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:43.718 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.718 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:43.718 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:43.718 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:43.976 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.976 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:16:43.976 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:43.976 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:43.976 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:43.976 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.233 08:12:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.233 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:44.490 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:44.490 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:44.490 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:44.490 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:44.490 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:44.490 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:44.491 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:44.491 08:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.748 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:45.005 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.005 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:45.005 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:45.005 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:45.005 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:45.005 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:45.005 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:45.005 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.262 08:12:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.262 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:45.519 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.519 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:45.519 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:45.519 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:45.519 08:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:45.519 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:45.519 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:45.519 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.776 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:46.033 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.033 
08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:46.033 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:46.033 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:46.033 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:46.033 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.033 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:46.033 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
5 nqn.2016-06.io.spdk:cnode1 null4 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.291 08:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:46.548 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.548 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:46.549 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:46.549 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:46.549 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:46.549 08:12:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.549 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:46.549 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.806 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:16:47.064 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.064 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:47.064 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:47.064 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:47.064 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:47.064 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:47.064 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:47.064 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.322 08:12:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:47.322 08:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:47.581 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.581 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:47.581 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:47.581 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:16:47.581 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:47.581 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:47.581 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:47.581 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:47.838 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.839 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.839 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:47.839 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.839 08:12:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.839 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:48.096 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:48.096 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:48.096 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:48.096 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:48.353 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:48.353 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:48.353 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:48.353 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
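The interleaved `nvmf_subsystem_add_ns` / `nvmf_subsystem_remove_ns` records above follow the pattern of `target/ns_hotplug_stress.sh` lines 16–18: attach null bdevs as namespaces in shuffled order, detach them in another shuffled order, and repeat up to ten passes. A minimal sketch of that cycle, reconstructed from the log (the `rpc.py` path, NQN, and loop bound are copied from the output; `RPC` is made overridable purely for illustration, and `shuf` stands in for whatever ordering the real script uses):

```shell
# Hypothetical reconstruction of the ns_hotplug_stress add/remove cycle.
RPC="${RPC:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py}"
NQN="${NQN:-nqn.2016-06.io.spdk:cnode1}"

hotplug_cycle() {
    # Attach null bdevs null0..null7 as namespaces 1..8 in shuffled order,
    # then detach them in another shuffled order; backgrounding the RPCs
    # lets the calls race each other, which is the point of the stress test.
    local n
    for n in $(shuf -i 1-8); do
        "$RPC" nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" &
    done
    wait
    for n in $(shuf -i 1-8); do
        "$RPC" nvmf_subsystem_remove_ns "$NQN" "$n" &
    done
    wait
}

stress() {
    # Mirrors the (( ++i )) / (( i < 10 )) driver loop visible in the log.
    local i=0 passes=${1:-10}
    while (( i < passes )); do
        hotplug_cycle
        (( ++i ))
    done
}
```

Because each pass shuffles independently, the adds and removes land in a different order every iteration, which is why the log shows the namespace IDs in no fixed sequence.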
00:16:48.609 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.609 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.609 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.609 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.609 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.609 08:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 
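The teardown that begins here (`nvmftestfini` → `nvmfcleanup` → `killprocess`) retries unloading the NVMe-oF kernel modules with `set +e` in force, then kills the `nvmf_tgt` process by PID. A simplified sketch of that shape, assuming the retry count and module names shown in the log (`killprocess` below is a stripped-down stand-in for the `autotest_common.sh` helper, which additionally checks the process name and sudo ownership):

```shell
# Hedged sketch of the nvmftestfini teardown seen in the log.
unload_nvme_tcp() {
    # Module removal can fail while references remain, so tolerate errors
    # and retry, as the {1..20} loop in nvmf/common.sh does.
    local i
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
}

killprocess() {
    # Simplified stand-in: kill the target and reap it if it is our child.
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
    return 0
}
```

The real helper also distinguishes the reactor process from `sudo` before signalling, which is what the `ps --no-headers -o comm=` record above is doing.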
00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.609 rmmod nvme_tcp 00:16:48.609 rmmod nvme_fabrics 00:16:48.609 rmmod nvme_keyring 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 4060623 ']' 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 4060623 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 4060623 ']' 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 4060623 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4060623 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4060623' 00:16:48.609 killing 
process with pid 4060623 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 4060623 00:16:48.609 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 4060623 00:16:48.867 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.867 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.867 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.867 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.867 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.867 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.867 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.867 08:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.846 08:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.846 00:16:50.846 real 0m46.340s 00:16:50.846 user 3m31.660s 00:16:50.846 sys 0m16.032s 00:16:50.846 08:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.846 08:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.846 ************************************ 00:16:50.846 END TEST nvmf_ns_hotplug_stress 00:16:50.846 ************************************ 00:16:50.846 08:13:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:50.846 08:13:00 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:50.846 08:13:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:16:50.846 08:13:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.846 08:13:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.846 ************************************ 00:16:50.846 START TEST nvmf_connect_stress 00:16:50.846 ************************************ 00:16:50.846 08:13:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:51.105 * Looking for test storage... 00:16:51.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.105 08:13:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.105 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:51.105 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.105 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.105 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.110 08:13:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.111 08:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.111 08:13:00 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.016 08:13:02 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.016 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:53.017 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:53.017 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:53.017 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:53.017 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.017 08:13:02 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:16:53.017 00:16:53.017 --- 10.0.0.2 ping statistics --- 00:16:53.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.017 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:16:53.017 00:16:53.017 --- 10.0.0.1 ping statistics --- 00:16:53.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.017 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:53.017 08:13:02 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=4067835 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 4067835 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 4067835 ']' 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.017 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.275 [2024-07-21 08:13:02.650996] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
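The nvmf/common.sh trace up to this point moves the target NIC into a private network namespace, addresses both ends of the link, opens TCP port 4420, verifies reachability in both directions, and then launches nvmf_tgt inside that namespace. Condensed into a standalone sketch, using the commands exactly as they appear in this log (the interface names cvl_0_0/cvl_0_1, the namespace name, and the workspace path are specific to this run and will differ elsewhere; root is required):

```shell
# Network bring-up and target launch, as traced by nvmf/common.sh in this run.
TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"            # target side lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side (default namespace)
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target -> initiator

# Start the SPDK target inside the namespace (flags from this log).
ip netns exec "$TARGET_NS" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE &
```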
00:16:53.275 [2024-07-21 08:13:02.651077] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.275 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.275 [2024-07-21 08:13:02.716210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:53.275 [2024-07-21 08:13:02.807113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.275 [2024-07-21 08:13:02.807177] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.275 [2024-07-21 08:13:02.807205] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.275 [2024-07-21 08:13:02.807218] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.275 [2024-07-21 08:13:02.807231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:53.275 [2024-07-21 08:13:02.807319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.275 [2024-07-21 08:13:02.807431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.275 [2024-07-21 08:13:02.807433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 [2024-07-21 08:13:02.936852] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
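Once the reactors are up, connect_stress.sh configures the target through rpc_cmd, which in the autotest harness effectively wraps scripts/rpc.py against the target's RPC socket. The same configuration issued directly looks like the sketch below; the command names and arguments mirror the rpc_cmd calls traced in this log, while the rpc.py path is the standard SPDK location and is an assumption of this sketch:

```shell
# Target configuration from connect_stress.sh, issued via rpc.py directly
# (rpc.py path assumed; arguments taken from this log's rpc_cmd traces).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                   # allow any host, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                  # null bdev: 1000 MiB, 512 B blocks
```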
00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 [2024-07-21 08:13:02.963054] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.534 NULL1 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4067858 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 
08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:03 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.534 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.534 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.534 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 4067858 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.535 08:13:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.791 08:13:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.791 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:53.791 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.791 08:13:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.791 08:13:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.047 08:13:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.047 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:54.047 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.047 08:13:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.047 08:13:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.610 08:13:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.610 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:54.610 08:13:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.610 08:13:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.610 08:13:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.867 08:13:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.867 08:13:04 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 4067858 00:16:54.867 08:13:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.867 08:13:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.867 08:13:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.124 08:13:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.124 08:13:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:55.124 08:13:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.124 08:13:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.124 08:13:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.381 08:13:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.381 08:13:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:55.381 08:13:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.381 08:13:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.381 08:13:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.947 08:13:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.947 08:13:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:55.947 08:13:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.947 08:13:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.947 08:13:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.204 08:13:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.204 08:13:05 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 4067858 00:16:56.204 08:13:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.204 08:13:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.204 08:13:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.460 08:13:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.460 08:13:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:56.460 08:13:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.460 08:13:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.460 08:13:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.716 08:13:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.716 08:13:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:56.716 08:13:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.716 08:13:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.716 08:13:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.973 08:13:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.973 08:13:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:56.973 08:13:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.973 08:13:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.973 08:13:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.536 08:13:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.536 08:13:06 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 4067858 00:16:57.536 08:13:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.536 08:13:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.536 08:13:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.793 08:13:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.793 08:13:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:57.793 08:13:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.793 08:13:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.793 08:13:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.049 08:13:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.049 08:13:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:58.049 08:13:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.049 08:13:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.049 08:13:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.311 08:13:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.311 08:13:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:58.311 08:13:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.311 08:13:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.311 08:13:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.568 08:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.568 08:13:08 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 4067858 00:16:58.568 08:13:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.568 08:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.568 08:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.131 08:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.131 08:13:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:59.131 08:13:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.131 08:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.131 08:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.388 08:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.388 08:13:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:59.388 08:13:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.388 08:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.388 08:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.645 08:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.645 08:13:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:16:59.645 08:13:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.645 08:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.645 08:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.901 08:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.901 08:13:09 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 4067858 00:16:59.901 08:13:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.901 08:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.901 08:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.158 08:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.158 08:13:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:17:00.158 08:13:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.158 08:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.158 08:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.720 08:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.720 08:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:17:00.720 08:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.720 08:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.720 08:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.977 08:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.977 08:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:17:00.977 08:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.977 08:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.977 08:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.233 08:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.233 08:13:10 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 4067858 00:17:01.233 08:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.233 08:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.233 08:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.490 08:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.490 08:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:17:01.490 08:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.490 08:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.490 08:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.746 08:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.747 08:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:17:01.747 08:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.747 08:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.747 08:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.310 08:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.310 08:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:17:02.310 08:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.310 08:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.310 08:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.567 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.567 08:13:12 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 4067858 00:17:02.567 08:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.567 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.567 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.824 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.824 08:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:17:02.824 08:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.824 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.824 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.081 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.081 08:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:17:03.081 08:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.081 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.081 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.644 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.644 08:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:17:03.644 08:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.644 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.644 08:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.644 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4067858 00:17:03.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4067858) - No such process 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4067858 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.901 rmmod nvme_tcp 00:17:03.901 rmmod nvme_fabrics 00:17:03.901 rmmod nvme_keyring 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 4067835 ']' 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 4067835 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 4067835 ']' 
00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 4067835 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4067835 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4067835' 00:17:03.901 killing process with pid 4067835 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 4067835 00:17:03.901 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 4067835 00:17:04.157 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.157 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.157 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.157 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.157 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.157 08:13:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.157 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.157 08:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.079 08:13:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
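With the connect_stress process gone (the `kill -0` probe reports "No such process"), the EXIT trap fires nvmftestfini: the kernel NVMe modules are unloaded, the target app (pid 4067835 in this run) is killed, and the namespace plumbing is undone. A rough standalone equivalent follows; the body of the _remove_spdk_ns helper is not shown in this excerpt, so the `ip netns delete` line is an assumption about its effect:

```shell
# Condensed teardown mirroring nvmftestfini as traced in this log (root required).
sync
modprobe -v -r nvme-tcp           # log shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill 4067835 && wait 4067835      # stop the nvmf_tgt app (pid from this run)
ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns (helper body not shown)
ip -4 addr flush cvl_0_1
```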
00:17:06.079 00:17:06.079 real 0m15.218s 00:17:06.079 user 0m38.123s 00:17:06.079 sys 0m5.960s 00:17:06.080 08:13:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.080 08:13:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.080 ************************************ 00:17:06.080 END TEST nvmf_connect_stress 00:17:06.080 ************************************ 00:17:06.080 08:13:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:06.080 08:13:15 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:06.080 08:13:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:06.080 08:13:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.080 08:13:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.080 ************************************ 00:17:06.080 START TEST nvmf_fused_ordering 00:17:06.080 ************************************ 00:17:06.080 08:13:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:06.337 * Looking for test storage... 
00:17:06.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.337 08:13:15 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:06.337 08:13:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:17:06.338 08:13:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.235 08:13:17 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.235 
08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.235 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.235 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.235 
08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.235 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.235 08:13:17 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.235 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:08.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:17:08.235 00:17:08.235 --- 10.0.0.2 ping statistics --- 00:17:08.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.235 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:08.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:17:08.235 00:17:08.235 --- 10.0.0.1 ping statistics --- 00:17:08.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.235 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:08.235 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.236 08:13:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.236 08:13:17 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.236 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4071004 00:17:08.236 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:08.236 08:13:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4071004 00:17:08.236 08:13:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 4071004 ']' 00:17:08.236 08:13:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.236 08:13:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.236 08:13:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.236 08:13:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.236 08:13:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.493 [2024-07-21 08:13:17.880297] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:17:08.493 [2024-07-21 08:13:17.880387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.494 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.494 [2024-07-21 08:13:17.950651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.494 [2024-07-21 08:13:18.045270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:08.494 [2024-07-21 08:13:18.045328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.494 [2024-07-21 08:13:18.045344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.494 [2024-07-21 08:13:18.045358] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.494 [2024-07-21 08:13:18.045370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.494 [2024-07-21 08:13:18.045398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.751 [2024-07-21 08:13:18.181267] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.751 [2024-07-21 08:13:18.197444] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.751 NULL1 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.751 08:13:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:08.752 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.752 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.752 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.752 08:13:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:17:08.752 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.752 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:08.752 08:13:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.752 08:13:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:08.752 [2024-07-21 08:13:18.241480] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:17:08.752 [2024-07-21 08:13:18.241522] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071143 ] 00:17:08.752 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.315 Attached to nqn.2016-06.io.spdk:cnode1 00:17:09.315 Namespace ID: 1 size: 1GB 00:17:09.315 fused_ordering(0) 00:17:09.315 fused_ordering(1) 00:17:09.315 fused_ordering(2) 00:17:09.315 fused_ordering(3) 00:17:09.315 fused_ordering(4) 00:17:09.315 fused_ordering(5) 00:17:09.315 fused_ordering(6) 00:17:09.315 fused_ordering(7) 00:17:09.315 fused_ordering(8) 00:17:09.315 fused_ordering(9) 00:17:09.315 fused_ordering(10) 00:17:09.315 fused_ordering(11) 00:17:09.315 fused_ordering(12) 00:17:09.315 fused_ordering(13) 00:17:09.315 fused_ordering(14) 00:17:09.315 fused_ordering(15) 00:17:09.315 fused_ordering(16) 00:17:09.315 fused_ordering(17) 00:17:09.315 fused_ordering(18) 00:17:09.315 fused_ordering(19) 00:17:09.315 fused_ordering(20) 00:17:09.315 fused_ordering(21) 00:17:09.315 fused_ordering(22) 00:17:09.315 fused_ordering(23) 00:17:09.315 fused_ordering(24) 00:17:09.315 fused_ordering(25) 00:17:09.315 
fused_ordering(26) 00:17:09.315 fused_ordering(27) 00:17:09.315 fused_ordering(28) 00:17:09.315 fused_ordering(29) 00:17:09.315 fused_ordering(30) 00:17:09.315 fused_ordering(31) 00:17:09.315 fused_ordering(32) 00:17:09.315 fused_ordering(33) 00:17:09.315 fused_ordering(34) 00:17:09.315 fused_ordering(35) 00:17:09.315 fused_ordering(36) 00:17:09.315 fused_ordering(37) 00:17:09.315 fused_ordering(38) 00:17:09.315 fused_ordering(39) 00:17:09.315 fused_ordering(40) 00:17:09.315 fused_ordering(41) 00:17:09.315 fused_ordering(42) 00:17:09.315 fused_ordering(43) 00:17:09.315 fused_ordering(44) 00:17:09.315 fused_ordering(45) 00:17:09.315 fused_ordering(46) 00:17:09.315 fused_ordering(47) 00:17:09.315 fused_ordering(48) 00:17:09.315 fused_ordering(49) 00:17:09.315 fused_ordering(50) 00:17:09.315 fused_ordering(51) 00:17:09.315 fused_ordering(52) 00:17:09.315 fused_ordering(53) 00:17:09.315 fused_ordering(54) 00:17:09.315 fused_ordering(55) 00:17:09.315 fused_ordering(56) 00:17:09.315 fused_ordering(57) 00:17:09.315 fused_ordering(58) 00:17:09.315 fused_ordering(59) 00:17:09.315 fused_ordering(60) 00:17:09.315 fused_ordering(61) 00:17:09.315 fused_ordering(62) 00:17:09.315 fused_ordering(63) 00:17:09.315 fused_ordering(64) 00:17:09.315 fused_ordering(65) 00:17:09.315 fused_ordering(66) 00:17:09.315 fused_ordering(67) 00:17:09.315 fused_ordering(68) 00:17:09.315 fused_ordering(69) 00:17:09.315 fused_ordering(70) 00:17:09.315 fused_ordering(71) 00:17:09.315 fused_ordering(72) 00:17:09.315 fused_ordering(73) 00:17:09.315 fused_ordering(74) 00:17:09.315 fused_ordering(75) 00:17:09.315 fused_ordering(76) 00:17:09.315 fused_ordering(77) 00:17:09.315 fused_ordering(78) 00:17:09.315 fused_ordering(79) 00:17:09.315 fused_ordering(80) 00:17:09.315 fused_ordering(81) 00:17:09.315 fused_ordering(82) 00:17:09.315 fused_ordering(83) 00:17:09.315 fused_ordering(84) 00:17:09.315 fused_ordering(85) 00:17:09.315 fused_ordering(86) 00:17:09.315 fused_ordering(87) 00:17:09.315 
fused_ordering(88) 00:17:09.315
[... fused_ordering(89) through fused_ordering(1022) elided: identical per-iteration trace lines, timestamps 00:17:09.315 through 00:17:11.265 ...]
fused_ordering(1023) 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.265 rmmod nvme_tcp 00:17:11.265 rmmod nvme_fabrics 00:17:11.265 rmmod nvme_keyring 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4071004 ']' 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4071004 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 4071004 ']' 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 4071004 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.265 08:13:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4071004 00:17:11.266 08:13:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:11.266 08:13:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:11.266 08:13:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4071004' 00:17:11.266 killing process with pid 4071004 00:17:11.266 08:13:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 4071004 00:17:11.266 08:13:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 4071004 00:17:11.524 08:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:11.524 08:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:11.524 08:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:11.524 08:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.524 08:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.524 08:13:21 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.524 08:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.524 08:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.053 08:13:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:14.053 00:17:14.053 real 0m7.450s 00:17:14.053 user 0m5.200s 00:17:14.053 sys 0m3.127s 00:17:14.053 08:13:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:14.053 08:13:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:14.053 ************************************ 00:17:14.053 END TEST nvmf_fused_ordering 00:17:14.053 ************************************ 00:17:14.053 08:13:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:14.053 08:13:23 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:14.053 08:13:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:14.053 08:13:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.053 08:13:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:14.053 ************************************ 00:17:14.053 START TEST nvmf_delete_subsystem 00:17:14.053 ************************************ 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:14.053 * Looking for test storage... 
00:17:14.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.053 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.054 08:13:23 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.054 08:13:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.953 08:13:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:15.953 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:15.953 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:15.953 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:15.953 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:15.953 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.954 
08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:15.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:15.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:17:15.954 00:17:15.954 --- 10.0.0.2 ping statistics --- 00:17:15.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.954 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:15.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:17:15.954 00:17:15.954 --- 10.0.0.1 ping statistics --- 00:17:15.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.954 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:15.954 
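The trace above (nvmf_tcp_init in test/nvmf/common.sh) builds the TCP test topology: one port of the NIC pair is moved into a fresh network namespace to act as the target, the other stays in the root namespace as the initiator, and a firewall rule opens the NVMe/TCP port before connectivity is verified with ping in each direction. A condensed sketch of that sequence follows; the interface names, namespace name, and 10.0.0.0/24 addresses are copied from the log, and `RUN` defaults to `echo` so the privileged commands are printed rather than executed (set `RUN=` to run them for real as root).

```shell
#!/usr/bin/env bash
# Dry-run sketch of SPDK's nvmf_tcp_init network setup, reconstructed
# from the log above. RUN=echo prints each command instead of running it.
RUN="${RUN:-echo}"

NS=cvl_0_0_ns_spdk        # target-side network namespace (from the log)
TGT_IF=cvl_0_0            # interface moved into the namespace (target)
INI_IF=cvl_0_1            # interface left in the root namespace (initiator)
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

$RUN ip -4 addr flush "$TGT_IF"
$RUN ip -4 addr flush "$INI_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TGT_IF" netns "$NS"
$RUN ip addr add "$INI_IP/24" dev "$INI_IF"
$RUN ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
# Open the default NVMe/TCP port on the initiator side, then check
# reachability both ways, as the log does before starting nvmf_tgt.
$RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 "$TGT_IP"
$RUN ip netns exec "$NS" ping -c 1 "$INI_IP"
```

Because the target runs inside the namespace, the log later prefixes nvmf_tgt with `ip netns exec cvl_0_0_ns_spdk`, which is exactly the `NVMF_TARGET_NS_CMD` array assembled in the trace.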
08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4073343 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4073343 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 4073343 ']' 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.954 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:15.954 [2024-07-21 08:13:25.363998] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:17:15.954 [2024-07-21 08:13:25.364086] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.954 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.954 [2024-07-21 08:13:25.433242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:15.954 [2024-07-21 08:13:25.522869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.954 [2024-07-21 08:13:25.522941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.954 [2024-07-21 08:13:25.522958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.954 [2024-07-21 08:13:25.522983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.954 [2024-07-21 08:13:25.522996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:15.954 [2024-07-21 08:13:25.523087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.954 [2024-07-21 08:13:25.523094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:16.212 [2024-07-21 08:13:25.672556] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.212 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:16.212 [2024-07-21 08:13:25.688861] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:16.213 NULL1 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:16.213 Delay0 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
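The `rpc_cmd` calls above configure the target for the test: create the TCP transport, create subsystem cnode1, attach a listener on the namespaced address, back it with a null bdev, and wrap that in a delay bdev so I/O sits in flight long enough to be caught by the later subsystem deletion. A sketch of the same sequence via SPDK's `scripts/rpc.py` (the script `rpc_cmd` wraps; the relative path is an assumption) is below, with all parameters copied verbatim from the log; `RPC_DRY` defaults to `echo` so no running nvmf_tgt is needed.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the delete_subsystem.sh RPC setup seen in the log.
RPC="${RPC_DRY:-echo} scripts/rpc.py"     # path to rpc.py is assumed
NQN=nqn.2016-06.io.spdk:cnode1

# Transport options (-o, -u 8192) copied verbatim from the trace.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512      # 1000 MiB null bdev, 512 B blocks
# Delay every op on NULL1 by ~1 s (latencies are in microseconds), so
# queued I/O is still outstanding when the subsystem is deleted.
$RPC bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns "$NQN" Delay0
```

The delay bdev is the crux of the test design: without it, a null bdev completes I/O immediately and `nvmf_delete_subsystem` would rarely race with outstanding commands.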
00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4073371 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:16.213 08:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:16.213 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.213 [2024-07-21 08:13:25.763487] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:18.111 08:13:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.111 08:13:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.111 08:13:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, 
sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error 
(sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 [2024-07-21 08:13:27.815661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7b160 is same with the state(5) to be set 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 Read completed with error (sct=0, sc=8) 00:17:18.368 starting I/O failed: -6 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 Write completed with error (sct=0, sc=8) 00:17:18.368 
starting I/O failed: -6
00:17:18.368 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided ...]
00:17:18.368 [2024-07-21 08:13:27.816421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f387800cff0 is same with the state(5) to be set
00:17:18.368 [... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:17:19.298 [2024-07-21 08:13:28.778437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c88a30 is same with the state(5) to be set
00:17:19.298 [... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:17:19.298 [2024-07-21 08:13:28.818417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f387800d310 is same with the state(5) to be set
00:17:19.298 [... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:17:19.298 [2024-07-21 08:13:28.820033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7a970 is same with the state(5) to be set
00:17:19.298 [... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:17:19.298 [2024-07-21 08:13:28.820356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7ae40 is same with the state(5) to be set
00:17:19.298 [... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:17:19.298 [2024-07-21 08:13:28.820546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7b480 is same with the state(5) to be set
00:17:19.298 Initializing NVMe Controllers
00:17:19.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:19.298 Controller IO queue size 128, less than required.
00:17:19.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:19.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:17:19.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:17:19.298 Initialization complete. Launching workers.
00:17:19.298 ========================================================
00:17:19.298 Latency(us)
00:17:19.298 Device Information : IOPS MiB/s Average min max
00:17:19.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 178.74 0.09 958363.60 677.87 1012968.71
00:17:19.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.90 0.08 875534.58 340.68 1013739.87
00:17:19.298 ========================================================
00:17:19.298 Total : 334.64 0.16 919775.60 340.68 1013739.87
00:17:19.298
00:17:19.298 [2024-07-21 08:13:28.821415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c88a30 (9): Bad file descriptor
00:17:19.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:17:19.299 08:13:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:19.299 08:13:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:17:19.299 08:13:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4073371
00:17:19.299 08:13:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4073371
00:17:19.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4073371) - No such process
00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4073371
00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 4073371
00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem --
common/autotest_common.sh@636 -- # local arg=wait 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 4073371 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:19.864 [2024-07-21 08:13:29.343291] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4073787 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4073787 00:17:19.864 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:19.864 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.864 [2024-07-21 08:13:29.408371] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
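As a side note on reading the Latency(us) tables this run prints: the MiB/s column is consistent with IOPS multiplied by the IO size passed to `spdk_nvme_perf` (`-o 512`, i.e. 512-byte IOs), divided by 2**20. A minimal check of that relation, using a hypothetical helper (not part of the SPDK tooling) and the figures from the first table above:

```python
def iops_to_mib_s(iops, io_size_bytes=512):
    """Convert an IOPS figure to MiB/s for a fixed IO size.

    Each IO moves io_size_bytes, and 1 MiB = 2**20 bytes, so
    throughput in MiB/s is simply iops * io_size_bytes / 2**20.
    """
    return iops * io_size_bytes / 2**20

# Figures from the first Latency(us) table in this log (-o 512):
print(round(iops_to_mib_s(178.74), 2))  # 0.09 (core 2 row)
print(round(iops_to_mib_s(155.90), 2))  # 0.08 (core 3 row)
print(round(iops_to_mib_s(334.64), 2))  # 0.16 (Total row)
```

The same relation reproduces the second table below: 128.00 IOPS at 512 bytes is 0.0625 MiB/s, printed as 0.06.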
00:17:20.428 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:17:20.428 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4073787
00:17:20.428 08:13:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:17:20.990 08:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:17:20.990 08:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4073787
00:17:20.990 08:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:17:21.247 08:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:17:21.247 08:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4073787
00:17:21.247 08:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:17:21.809 08:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:17:21.809 08:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4073787
00:17:21.809 08:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:17:22.388 08:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:17:22.388 08:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4073787
00:17:22.388 08:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:17:22.996 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:17:22.996 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4073787
00:17:22.996 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:17:22.996 Initializing NVMe Controllers
00:17:22.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:22.996 Controller IO queue size 128, less than required.
00:17:22.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:22.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:17:22.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:17:22.996 Initialization complete. Launching workers.
00:17:22.996 ========================================================
00:17:22.996 Latency(us)
00:17:22.996 Device Information : IOPS MiB/s Average min max
00:17:22.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003555.77 1000183.94 1011511.32
00:17:22.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004978.71 1000232.26 1042005.08
00:17:22.996 ========================================================
00:17:22.996 Total : 256.00 0.12 1004267.24 1000183.94 1042005.08
00:17:22.996
00:17:23.253 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:17:23.253 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4073787
00:17:23.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4073787) - No such process
00:17:23.253 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4073787
00:17:23.253 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:17:23.253 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:17:23.253 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:23.253 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:17:23.253 08:13:32 nvmf_tcp.nvmf_delete_subsystem --
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.253 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:17:23.253 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.253 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.253 rmmod nvme_tcp 00:17:23.512 rmmod nvme_fabrics 00:17:23.512 rmmod nvme_keyring 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4073343 ']' 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4073343 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 4073343 ']' 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 4073343 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4073343 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4073343' 00:17:23.512 killing process with pid 4073343 00:17:23.512 08:13:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 4073343 00:17:23.512 08:13:32 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 4073343 00:17:23.772 08:13:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.772 08:13:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:23.772 08:13:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:23.772 08:13:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.772 08:13:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:23.772 08:13:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.772 08:13:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.772 08:13:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.669 08:13:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.669 00:17:25.669 real 0m12.030s 00:17:25.669 user 0m27.387s 00:17:25.669 sys 0m2.896s 00:17:25.669 08:13:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.669 08:13:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:25.669 ************************************ 00:17:25.669 END TEST nvmf_delete_subsystem 00:17:25.669 ************************************ 00:17:25.669 08:13:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:25.669 08:13:35 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:25.669 08:13:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.669 08:13:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.669 08:13:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.669 ************************************ 
00:17:25.669 START TEST nvmf_ns_masking 00:17:25.669 ************************************ 00:17:25.669 08:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:25.926 * Looking for test storage... 00:17:25.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.926 08:13:35 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.926 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2232a9e3-98dc-4de7-9ed6-596dc8b9e245 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=41e9bd1b-7761-4ce3-b2be-e8b4eb1584b8 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=75755190-d35c-4099-8de8-ba50c2864722 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.927 08:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:17:27.824 
08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:27.824 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:27.824 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:27.824 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:27.824 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.824 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.825 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:28.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:17:28.082 00:17:28.082 --- 10.0.0.2 ping statistics --- 00:17:28.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.082 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:17:28.082 00:17:28.082 --- 10.0.0.1 ping statistics --- 00:17:28.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.082 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.082 08:13:37 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4076231 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4076231 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4076231 ']' 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.082 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:28.082 [2024-07-21 08:13:37.583263] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:17:28.082 [2024-07-21 08:13:37.583363] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.082 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.082 [2024-07-21 08:13:37.650923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.339 [2024-07-21 08:13:37.740315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.339 [2024-07-21 08:13:37.740378] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.339 [2024-07-21 08:13:37.740405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.339 [2024-07-21 08:13:37.740419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.339 [2024-07-21 08:13:37.740430] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:28.339 [2024-07-21 08:13:37.740461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.339 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.339 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:28.339 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:28.339 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:28.339 08:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:28.339 08:13:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.339 08:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:28.596 [2024-07-21 08:13:38.130053] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.596 08:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:28.596 08:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:28.596 08:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:28.853 Malloc1 00:17:28.853 08:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:29.110 Malloc2 00:17:29.110 08:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:29.368 08:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:29.625 08:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:29.882 [2024-07-21 08:13:39.388969] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.882 08:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:29.882 08:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 75755190-d35c-4099-8de8-ba50c2864722 -a 10.0.0.2 -s 4420 -i 4 00:17:30.139 08:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:30.139 08:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:30.139 08:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:30.139 08:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:30.139 08:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:32.036 08:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:32.036 08:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:32.036 08:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:32.036 08:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:32.036 08:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:32.036 08:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:32.036 
08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:32.036 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:32.293 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:32.293 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:32.293 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:32.293 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:32.293 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:32.293 [ 0]:0x1 00:17:32.293 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:32.293 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:32.293 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f2619564c5844d2ba454886ae21b381 00:17:32.293 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f2619564c5844d2ba454886ae21b381 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:32.293 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:32.550 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:32.550 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:32.550 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:32.550 [ 0]:0x1 00:17:32.550 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:32.550 08:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f2619564c5844d2ba454886ae21b381 00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f2619564c5844d2ba454886ae21b381 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:32.551 [ 1]:0x2 00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8a444fcdda24d5fa94df53d1e40813e 00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8a444fcdda24d5fa94df53d1e40813e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:32.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.551 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:33.115 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:33.115 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:33.115 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 75755190-d35c-4099-8de8-ba50c2864722 -a 10.0.0.2 -s 4420 -i 4 00:17:33.373 08:13:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:33.373 08:13:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:33.373 08:13:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:33.373 08:13:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:33.373 08:13:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:33.373 08:13:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:35.894 08:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:35.894 [ 0]:0x2 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8a444fcdda24d5fa94df53d1e40813e 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8a444fcdda24d5fa94df53d1e40813e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:35.894 [ 0]:0x1 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.894 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f2619564c5844d2ba454886ae21b381 00:17:35.895 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f2619564c5844d2ba454886ae21b381 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.895 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:35.895 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:35.895 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:17:35.895 [ 1]:0x2 00:17:35.895 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:35.895 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.895 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8a444fcdda24d5fa94df53d1e40813e 00:17:35.895 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8a444fcdda24d5fa94df53d1e40813e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.895 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:36.151 [ 0]:0x2 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:36.151 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.408 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8a444fcdda24d5fa94df53d1e40813e 00:17:36.408 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8a444fcdda24d5fa94df53d1e40813e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.408 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:36.408 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:36.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.408 08:13:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:36.665 08:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:36.665 08:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 75755190-d35c-4099-8de8-ba50c2864722 -a 10.0.0.2 -s 4420 -i 4 00:17:36.923 08:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:36.923 08:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:36.923 08:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:36.923 08:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:36.923 08:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:36.923 08:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:38.818 [ 0]:0x1 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f2619564c5844d2ba454886ae21b381 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f2619564c5844d2ba454886ae21b381 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:38.818 [ 1]:0x2 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8a444fcdda24d5fa94df53d1e40813e 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8a444fcdda24d5fa94df53d1e40813e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:38.818 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:39.075 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:39.331 08:13:48 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:39.331 [ 0]:0x2 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8a444fcdda24d5fa94df53d1e40813e 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8a444fcdda24d5fa94df53d1e40813e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:39.331 08:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:39.617 [2024-07-21 08:13:49.002360] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:39.617 request: 00:17:39.617 { 00:17:39.617 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.617 "nsid": 2, 00:17:39.617 "host": "nqn.2016-06.io.spdk:host1", 00:17:39.617 "method": "nvmf_ns_remove_host", 00:17:39.617 "req_id": 1 00:17:39.617 } 00:17:39.617 Got JSON-RPC error response 00:17:39.617 response: 00:17:39.617 { 00:17:39.617 "code": -32602, 00:17:39.617 "message": "Invalid parameters" 00:17:39.617 } 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
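The `NOT` wrapper exercised above (from `autotest_common.sh`) treats a command's failure as success, which is how the masked-namespace cases — like the `nvmf_ns_remove_host` call that just returned `-32602 Invalid parameters` — still count as passing checks. A minimal sketch of the idea; the real helper also routes through `valid_exec_arg` and tracks an `es` exit-status variable not reproduced here:

```shell
# Simplified sketch of the NOT helper: succeed only when the wrapped
# command fails, mirroring how the masked-namespace assertions above work.
# The real autotest_common.sh version adds argument validation (valid_exec_arg)
# and exit-status bookkeeping (es) omitted from this sketch.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what the caller expected
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success caught"
```

Run standalone, the two demo lines print "expected failure observed" and "unexpected success caught", matching the inverted-status behavior seen in the transcript.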
00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:39.617 [ 0]:0x2 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8a444fcdda24d5fa94df53d1e40813e 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8a444fcdda24d5fa94df53d1e40813e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:39.617 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:39.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.875 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4077733 00:17:39.875 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:39.875 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.875 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4077733 /var/tmp/host.sock 00:17:39.875 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4077733 ']' 00:17:39.875 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:39.875 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.875 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:39.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
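`waitforlisten` above blocks until the freshly started `spdk_tgt` (pid 4077733) answers on `/var/tmp/host.sock`, retrying up to the `max_retries=100` cap visible in the trace. The sketch below shows only the bounded-retry idea; a plain socket-file existence check stands in for the real helper's RPC probe, which is an assumption about its internals:

```shell
# Bounded-retry wait for a UNIX domain socket, sketching the waitforlisten
# pattern from the transcript. The real helper probes the socket through
# rpc.py rather than merely checking that the socket file exists; the
# default retry cap mirrors the max_retries=100 seen in the log.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while ((i++ < max_retries)); do
        [[ -S $sock ]] && return 0   # socket is present, target is up
        sleep 0.1
    done
    return 1                          # gave up waiting
}
```

With the socket present the loop returns immediately; otherwise it fails after `max_retries` polls, which is when the test harness would abort instead of hanging forever.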
00:17:39.875 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.875 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:39.875 [2024-07-21 08:13:49.355363] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:17:39.875 [2024-07-21 08:13:49.355462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077733 ] 00:17:39.875 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.876 [2024-07-21 08:13:49.418112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.132 [2024-07-21 08:13:49.511920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.389 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.389 08:13:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:40.389 08:13:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.646 08:13:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:40.904 08:13:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2232a9e3-98dc-4de7-9ed6-596dc8b9e245 00:17:40.904 08:13:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:40.904 08:13:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2232A9E398DC4DE79ED6596DC8B9E245 -i 00:17:41.160 08:13:50 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # uuid2nguid 41e9bd1b-7761-4ce3-b2be-e8b4eb1584b8 00:17:41.160 08:13:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:41.160 08:13:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 41E9BD1B77614CE3B2BEE8B4EB1584B8 -i 00:17:41.415 08:13:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:41.671 08:13:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:41.926 08:13:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:41.926 08:13:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:42.182 nvme0n1 00:17:42.182 08:13:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:42.182 08:13:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:42.438 nvme1n2 00:17:42.695 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # 
hostrpc bdev_get_bdevs 00:17:42.695 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:42.695 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:42.695 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:42.695 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:42.951 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:42.951 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:42.951 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:42.951 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:43.208 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2232a9e3-98dc-4de7-9ed6-596dc8b9e245 == \2\2\3\2\a\9\e\3\-\9\8\d\c\-\4\d\e\7\-\9\e\d\6\-\5\9\6\d\c\8\b\9\e\2\4\5 ]] 00:17:43.208 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:43.208 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:43.208 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:43.208 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 41e9bd1b-7761-4ce3-b2be-e8b4eb1584b8 == \4\1\e\9\b\d\1\b\-\7\7\6\1\-\4\c\e\3\-\b\2\b\e\-\e\8\b\4\e\b\1\5\8\4\b\8 ]] 00:17:43.208 08:13:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 4077733 00:17:43.208 08:13:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # 
'[' -z 4077733 ']' 00:17:43.208 08:13:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4077733 00:17:43.208 08:13:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:43.464 08:13:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:43.464 08:13:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4077733 00:17:43.464 08:13:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:43.464 08:13:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:43.464 08:13:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4077733' 00:17:43.464 killing process with pid 4077733 00:17:43.464 08:13:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4077733 00:17:43.464 08:13:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4077733 00:17:43.720 08:13:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.977 rmmod nvme_tcp 00:17:43.977 rmmod 
nvme_fabrics 00:17:43.977 rmmod nvme_keyring 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4076231 ']' 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4076231 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4076231 ']' 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4076231 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:43.977 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4076231 00:17:44.233 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:44.233 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:44.233 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4076231' 00:17:44.233 killing process with pid 4076231 00:17:44.233 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4076231 00:17:44.233 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4076231 00:17:44.493 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:44.493 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:44.493 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:44.493 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
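The `-g` values handed to `nvmf_subsystem_add_ns` earlier (e.g. `2232A9E398DC4DE79ED6596DC8B9E245` for Malloc1) come from the `uuid2nguid` helper in `nvmf/common.sh`. The transcript only shows its `tr -d -` step, so the upper-casing in this sketch is an assumption inferred from the values in the log:

```shell
# Sketch of uuid2nguid: an NGUID is the UUID with the dashes stripped.
# Only the 'tr -d -' step appears in the transcript; the upper-casing is
# assumed from the NGUID values the log shows being passed to -g.
uuid2nguid() {
    local uuid=${1^^}        # upper-case the hex digits (bash 4+), assumed
    echo "$uuid" | tr -d -   # drop the dashes, as seen in the log
}

uuid2nguid 2232a9e3-98dc-4de7-9ed6-596dc8b9e245
# 2232A9E398DC4DE79ED6596DC8B9E245
```

The same conversion maps the second UUID in the transcript, `41e9bd1b-7761-4ce3-b2be-e8b4eb1584b8`, to the `41E9BD1B77614CE3B2BEE8B4EB1584B8` passed for Malloc2.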
00:17:44.493 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:44.493 08:13:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.493 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.493 08:13:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.395 08:13:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:46.395 00:17:46.395 real 0m20.677s 00:17:46.395 user 0m26.725s 00:17:46.395 sys 0m4.031s 00:17:46.395 08:13:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.395 08:13:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:46.395 ************************************ 00:17:46.395 END TEST nvmf_ns_masking 00:17:46.395 ************************************ 00:17:46.395 08:13:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:46.395 08:13:55 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:17:46.395 08:13:55 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:46.395 08:13:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:46.395 08:13:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.395 08:13:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:46.395 ************************************ 00:17:46.395 START TEST nvmf_nvme_cli 00:17:46.395 ************************************ 00:17:46.395 08:13:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:46.653 * Looking for test storage... 
00:17:46.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.653 08:13:56 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.653 08:13:56 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:46.653 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:46.654 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.654 08:13:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.654 08:13:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.654 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:46.654 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:46.654 08:13:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:17:46.654 08:13:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:48.555 08:13:58 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.555 08:13:58 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:48.555 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:48.556 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:48.556 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.556 08:13:58 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:48.556 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:48.556 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:48.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:17:48.556 00:17:48.556 --- 10.0.0.2 ping statistics --- 00:17:48.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.556 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:17:48.556 00:17:48.556 --- 10.0.0.1 ping statistics --- 00:17:48.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.556 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4080219 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4080219 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 4080219 ']' 
00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.556 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:48.814 [2024-07-21 08:13:58.219389] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:17:48.814 [2024-07-21 08:13:58.219488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.814 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.814 [2024-07-21 08:13:58.289254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.814 [2024-07-21 08:13:58.383874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.814 [2024-07-21 08:13:58.383940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.814 [2024-07-21 08:13:58.383968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.814 [2024-07-21 08:13:58.383983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.814 [2024-07-21 08:13:58.383999] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:48.814 [2024-07-21 08:13:58.384057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.814 [2024-07-21 08:13:58.384111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.814 [2024-07-21 08:13:58.384173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.814 [2024-07-21 08:13:58.384175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:49.072 [2024-07-21 08:13:58.547569] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:49.072 Malloc0 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.072 
08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:49.072 Malloc1 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:49.072 [2024-07-21 08:13:58.633002] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.072 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:49.329 00:17:49.329 Discovery Log Number of Records 2, Generation counter 2 00:17:49.329 =====Discovery Log Entry 0====== 00:17:49.329 trtype: tcp 00:17:49.329 adrfam: ipv4 00:17:49.329 subtype: current discovery subsystem 00:17:49.329 treq: not required 00:17:49.329 portid: 0 00:17:49.329 trsvcid: 4420 00:17:49.329 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:49.329 traddr: 10.0.0.2 00:17:49.329 eflags: explicit discovery connections, duplicate discovery information 00:17:49.329 sectype: none 00:17:49.329 =====Discovery Log Entry 1====== 00:17:49.329 trtype: tcp 00:17:49.330 adrfam: ipv4 00:17:49.330 subtype: nvme subsystem 00:17:49.330 treq: not required 00:17:49.330 portid: 0 00:17:49.330 trsvcid: 4420 00:17:49.330 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:49.330 traddr: 10.0.0.2 00:17:49.330 eflags: none 00:17:49.330 sectype: none 00:17:49.330 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:49.330 08:13:58 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:49.330 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:49.330 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:49.330 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:49.330 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:49.330 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:49.330 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:49.330 08:13:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:49.330 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:49.330 08:13:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.894 08:13:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:49.894 08:13:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:49.894 08:13:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.894 08:13:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:49.894 08:13:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:49.894 08:13:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:51.828 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:51.829 /dev/nvme0n1 ]] 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.829 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:52.086 rmmod nvme_tcp 00:17:52.086 rmmod nvme_fabrics 00:17:52.086 rmmod nvme_keyring 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4080219 ']' 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4080219 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@948 -- # '[' -z 4080219 ']' 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 4080219 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4080219 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4080219' 00:17:52.086 killing process with pid 4080219 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 4080219 00:17:52.086 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 4080219 00:17:52.343 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:52.343 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:52.343 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:52.343 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:52.343 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:52.343 08:14:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.343 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.343 08:14:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.284 08:14:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:54.284 00:17:54.284 real 0m7.880s 00:17:54.284 user 
0m14.100s 00:17:54.284 sys 0m2.126s 00:17:54.284 08:14:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.284 08:14:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.284 ************************************ 00:17:54.284 END TEST nvmf_nvme_cli 00:17:54.284 ************************************ 00:17:54.285 08:14:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:54.285 08:14:03 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:17:54.285 08:14:03 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:54.285 08:14:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:54.285 08:14:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.285 08:14:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:54.542 ************************************ 00:17:54.542 START TEST nvmf_vfio_user 00:17:54.542 ************************************ 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:54.542 * Looking for test storage... 
00:17:54.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:54.542 08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:54.542 
08:14:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4081019 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4081019' 00:17:54.542 Process pid: 4081019 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4081019 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4081019 ']' 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.542 08:14:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:54.542 [2024-07-21 08:14:04.051805] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:17:54.542 [2024-07-21 08:14:04.051882] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.542 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.542 [2024-07-21 08:14:04.109779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.799 [2024-07-21 08:14:04.197758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.799 [2024-07-21 08:14:04.197810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.799 [2024-07-21 08:14:04.197823] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.799 [2024-07-21 08:14:04.197835] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.799 [2024-07-21 08:14:04.197845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:54.799 [2024-07-21 08:14:04.200633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.799 [2024-07-21 08:14:04.200706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.799 [2024-07-21 08:14:04.200754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.799 [2024-07-21 08:14:04.200757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.799 08:14:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.799 08:14:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:17:54.799 08:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:55.726 08:14:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:55.983 08:14:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:55.983 08:14:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:55.983 08:14:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:55.983 08:14:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:55.983 08:14:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:56.240 Malloc1 00:17:56.240 08:14:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:56.803 08:14:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:56.803 08:14:06 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:57.060 08:14:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:57.060 08:14:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:57.060 08:14:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:57.317 Malloc2 00:17:57.317 08:14:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:57.573 08:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:57.830 08:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:58.087 08:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:58.087 08:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:58.087 08:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:58.087 08:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:58.087 08:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:58.087 08:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:58.087 [2024-07-21 08:14:07.677880] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:17:58.087 [2024-07-21 08:14:07.677944] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4081446 ] 00:17:58.087 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.087 [2024-07-21 08:14:07.710094] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:58.345 [2024-07-21 08:14:07.719073] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:58.345 [2024-07-21 08:14:07.719116] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2a38c81000 00:17:58.345 [2024-07-21 08:14:07.720062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:58.345 [2024-07-21 08:14:07.721057] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:58.345 [2024-07-21 08:14:07.722062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:58.345 [2024-07-21 08:14:07.723067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:58.345 [2024-07-21 08:14:07.724075] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 
5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:58.345 [2024-07-21 08:14:07.725075] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:58.345 [2024-07-21 08:14:07.726079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:58.345 [2024-07-21 08:14:07.727085] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:58.345 [2024-07-21 08:14:07.728092] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:58.345 [2024-07-21 08:14:07.728112] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2a37a35000 00:17:58.345 [2024-07-21 08:14:07.729231] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:58.345 [2024-07-21 08:14:07.744853] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:58.345 [2024-07-21 08:14:07.744890] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:58.345 [2024-07-21 08:14:07.747211] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:58.345 [2024-07-21 08:14:07.747268] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:58.345 [2024-07-21 08:14:07.747356] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:58.345 [2024-07-21 08:14:07.747383] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:58.345 [2024-07-21 08:14:07.747392] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:58.345 [2024-07-21 08:14:07.748208] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:58.345 [2024-07-21 08:14:07.748227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:58.345 [2024-07-21 08:14:07.748238] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:58.345 [2024-07-21 08:14:07.749214] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:58.345 [2024-07-21 08:14:07.749232] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:58.345 [2024-07-21 08:14:07.749245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:58.345 [2024-07-21 08:14:07.750639] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:58.345 [2024-07-21 08:14:07.750691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:58.345 [2024-07-21 08:14:07.751220] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:58.345 [2024-07-21 08:14:07.751238] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:58.345 [2024-07-21 08:14:07.751247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:58.345 [2024-07-21 08:14:07.751258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:58.345 [2024-07-21 08:14:07.751367] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:58.345 [2024-07-21 08:14:07.751375] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:58.345 [2024-07-21 08:14:07.751383] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:58.345 [2024-07-21 08:14:07.752239] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:58.345 [2024-07-21 08:14:07.753232] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:58.345 [2024-07-21 08:14:07.754237] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:58.345 [2024-07-21 08:14:07.755235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:58.345 [2024-07-21 08:14:07.755323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:58.345 [2024-07-21 08:14:07.759638] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:58.345 [2024-07-21 08:14:07.759658] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:58.345 [2024-07-21 08:14:07.759667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:58.345 [2024-07-21 08:14:07.759691] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:58.345 [2024-07-21 08:14:07.759704] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:58.345 [2024-07-21 08:14:07.759727] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:58.345 [2024-07-21 08:14:07.759736] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:58.345 [2024-07-21 08:14:07.759755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:58.345 [2024-07-21 08:14:07.759813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:58.345 [2024-07-21 08:14:07.759829] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:58.345 [2024-07-21 08:14:07.759840] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:58.345 [2024-07-21 08:14:07.759849] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:17:58.345 [2024-07-21 08:14:07.759856] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:58.345 [2024-07-21 08:14:07.759867] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:58.345 [2024-07-21 08:14:07.759875] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:58.345 [2024-07-21 08:14:07.759883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:58.345 [2024-07-21 08:14:07.759904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:58.345 [2024-07-21 08:14:07.759918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:58.345 [2024-07-21 08:14:07.759946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:58.345 [2024-07-21 08:14:07.759967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.345 [2024-07-21 08:14:07.759981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.345 [2024-07-21 08:14:07.759992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.345 [2024-07-21 08:14:07.760004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.345 [2024-07-21 08:14:07.760012] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:58.345 [2024-07-21 08:14:07.760028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:58.345 [2024-07-21 08:14:07.760042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:58.345 [2024-07-21 08:14:07.760053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:58.345 [2024-07-21 08:14:07.760063] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:58.345 [2024-07-21 08:14:07.760071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:58.345 [2024-07-21 08:14:07.760082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:58.345 [2024-07-21 08:14:07.760092] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:58.345 [2024-07-21 08:14:07.760104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:58.345 [2024-07-21 08:14:07.760118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:58.345 [2024-07-21 08:14:07.760182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:17:58.345 [2024-07-21 08:14:07.760198] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:58.345 [2024-07-21 08:14:07.760211] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:58.345 [2024-07-21 08:14:07.760219] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:58.346 [2024-07-21 08:14:07.760228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:58.346 [2024-07-21 08:14:07.760243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:58.346 [2024-07-21 08:14:07.760259] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:58.346 [2024-07-21 08:14:07.760278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:58.346 [2024-07-21 08:14:07.760291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:58.346 [2024-07-21 08:14:07.760303] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:58.346 [2024-07-21 08:14:07.760311] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:58.346 [2024-07-21 08:14:07.760320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:58.346 [2024-07-21 08:14:07.760344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:58.346 
[2024-07-21 08:14:07.760365] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:58.346 [2024-07-21 08:14:07.760379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:58.346 [2024-07-21 08:14:07.760390] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:58.346 [2024-07-21 08:14:07.760398] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:58.346 [2024-07-21 08:14:07.760407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:58.346 [2024-07-21 08:14:07.760418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:58.346 [2024-07-21 08:14:07.760431] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:58.346 [2024-07-21 08:14:07.760442] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:58.346 [2024-07-21 08:14:07.760455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:58.346 [2024-07-21 08:14:07.760465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:58.346 [2024-07-21 08:14:07.760473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:17:58.346 [2024-07-21 08:14:07.760481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:58.346 [2024-07-21 08:14:07.760489] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:58.346 [2024-07-21 08:14:07.760496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:58.346 [2024-07-21 08:14:07.760504] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:58.346 [2024-07-21 08:14:07.760529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:58.346 [2024-07-21 08:14:07.760546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:58.346 [2024-07-21 08:14:07.760568] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:58.346 [2024-07-21 08:14:07.760581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:58.346 [2024-07-21 08:14:07.760628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:58.346 [2024-07-21 08:14:07.760644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:58.346 [2024-07-21 08:14:07.760661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:58.346 [2024-07-21 08:14:07.760677] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:58.346 [2024-07-21 08:14:07.760700] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:58.346 [2024-07-21 08:14:07.760711] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:58.346 [2024-07-21 08:14:07.760717] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:58.346 [2024-07-21 08:14:07.760723] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:58.346 [2024-07-21 08:14:07.760732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:58.346 [2024-07-21 08:14:07.760743] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:58.346 [2024-07-21 08:14:07.760751] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:58.346 [2024-07-21 08:14:07.760760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:58.346 [2024-07-21 08:14:07.760771] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:58.346 [2024-07-21 08:14:07.760779] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:58.346 [2024-07-21 08:14:07.760788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:58.346 [2024-07-21 08:14:07.760800] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:58.346 [2024-07-21 08:14:07.760807] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:58.346 [2024-07-21 08:14:07.760816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:58.346 [2024-07-21 08:14:07.760828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:58.346 [2024-07-21 08:14:07.760848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:58.346 [2024-07-21 08:14:07.760865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:58.346 [2024-07-21 08:14:07.760878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:58.346 ===================================================== 00:17:58.346 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:58.346 ===================================================== 00:17:58.346 Controller Capabilities/Features 00:17:58.346 ================================ 00:17:58.346 Vendor ID: 4e58 00:17:58.346 Subsystem Vendor ID: 4e58 00:17:58.346 Serial Number: SPDK1 00:17:58.346 Model Number: SPDK bdev Controller 00:17:58.346 Firmware Version: 24.09 00:17:58.346 Recommended Arb Burst: 6 00:17:58.346 IEEE OUI Identifier: 8d 6b 50 00:17:58.346 Multi-path I/O 00:17:58.346 May have multiple subsystem ports: Yes 00:17:58.346 May have multiple controllers: Yes 00:17:58.346 Associated with SR-IOV VF: No 00:17:58.346 Max Data Transfer Size: 131072 00:17:58.346 Max Number of Namespaces: 32 00:17:58.346 Max Number of I/O Queues: 127 00:17:58.346 NVMe Specification Version (VS): 1.3 00:17:58.346 NVMe Specification Version (Identify): 1.3 00:17:58.346 Maximum Queue Entries: 256 00:17:58.346 
Contiguous Queues Required: Yes 00:17:58.346 Arbitration Mechanisms Supported 00:17:58.346 Weighted Round Robin: Not Supported 00:17:58.346 Vendor Specific: Not Supported 00:17:58.346 Reset Timeout: 15000 ms 00:17:58.346 Doorbell Stride: 4 bytes 00:17:58.346 NVM Subsystem Reset: Not Supported 00:17:58.346 Command Sets Supported 00:17:58.346 NVM Command Set: Supported 00:17:58.346 Boot Partition: Not Supported 00:17:58.346 Memory Page Size Minimum: 4096 bytes 00:17:58.346 Memory Page Size Maximum: 4096 bytes 00:17:58.346 Persistent Memory Region: Not Supported 00:17:58.346 Optional Asynchronous Events Supported 00:17:58.346 Namespace Attribute Notices: Supported 00:17:58.346 Firmware Activation Notices: Not Supported 00:17:58.346 ANA Change Notices: Not Supported 00:17:58.346 PLE Aggregate Log Change Notices: Not Supported 00:17:58.346 LBA Status Info Alert Notices: Not Supported 00:17:58.346 EGE Aggregate Log Change Notices: Not Supported 00:17:58.346 Normal NVM Subsystem Shutdown event: Not Supported 00:17:58.346 Zone Descriptor Change Notices: Not Supported 00:17:58.346 Discovery Log Change Notices: Not Supported 00:17:58.346 Controller Attributes 00:17:58.346 128-bit Host Identifier: Supported 00:17:58.346 Non-Operational Permissive Mode: Not Supported 00:17:58.346 NVM Sets: Not Supported 00:17:58.346 Read Recovery Levels: Not Supported 00:17:58.346 Endurance Groups: Not Supported 00:17:58.346 Predictable Latency Mode: Not Supported 00:17:58.346 Traffic Based Keep ALive: Not Supported 00:17:58.346 Namespace Granularity: Not Supported 00:17:58.346 SQ Associations: Not Supported 00:17:58.346 UUID List: Not Supported 00:17:58.346 Multi-Domain Subsystem: Not Supported 00:17:58.346 Fixed Capacity Management: Not Supported 00:17:58.346 Variable Capacity Management: Not Supported 00:17:58.346 Delete Endurance Group: Not Supported 00:17:58.346 Delete NVM Set: Not Supported 00:17:58.346 Extended LBA Formats Supported: Not Supported 00:17:58.346 Flexible Data Placement 
Supported: Not Supported 00:17:58.346 00:17:58.346 Controller Memory Buffer Support 00:17:58.346 ================================ 00:17:58.346 Supported: No 00:17:58.346 00:17:58.346 Persistent Memory Region Support 00:17:58.346 ================================ 00:17:58.346 Supported: No 00:17:58.346 00:17:58.346 Admin Command Set Attributes 00:17:58.346 ============================ 00:17:58.346 Security Send/Receive: Not Supported 00:17:58.346 Format NVM: Not Supported 00:17:58.346 Firmware Activate/Download: Not Supported 00:17:58.346 Namespace Management: Not Supported 00:17:58.346 Device Self-Test: Not Supported 00:17:58.347 Directives: Not Supported 00:17:58.347 NVMe-MI: Not Supported 00:17:58.347 Virtualization Management: Not Supported 00:17:58.347 Doorbell Buffer Config: Not Supported 00:17:58.347 Get LBA Status Capability: Not Supported 00:17:58.347 Command & Feature Lockdown Capability: Not Supported 00:17:58.347 Abort Command Limit: 4 00:17:58.347 Async Event Request Limit: 4 00:17:58.347 Number of Firmware Slots: N/A 00:17:58.347 Firmware Slot 1 Read-Only: N/A 00:17:58.347 Firmware Activation Without Reset: N/A 00:17:58.347 Multiple Update Detection Support: N/A 00:17:58.347 Firmware Update Granularity: No Information Provided 00:17:58.347 Per-Namespace SMART Log: No 00:17:58.347 Asymmetric Namespace Access Log Page: Not Supported 00:17:58.347 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:58.347 Command Effects Log Page: Supported 00:17:58.347 Get Log Page Extended Data: Supported 00:17:58.347 Telemetry Log Pages: Not Supported 00:17:58.347 Persistent Event Log Pages: Not Supported 00:17:58.347 Supported Log Pages Log Page: May Support 00:17:58.347 Commands Supported & Effects Log Page: Not Supported 00:17:58.347 Feature Identifiers & Effects Log Page:May Support 00:17:58.347 NVMe-MI Commands & Effects Log Page: May Support 00:17:58.347 Data Area 4 for Telemetry Log: Not Supported 00:17:58.347 Error Log Page Entries Supported: 128 00:17:58.347 Keep 
Alive: Supported 00:17:58.347 Keep Alive Granularity: 10000 ms 00:17:58.347 00:17:58.347 NVM Command Set Attributes 00:17:58.347 ========================== 00:17:58.347 Submission Queue Entry Size 00:17:58.347 Max: 64 00:17:58.347 Min: 64 00:17:58.347 Completion Queue Entry Size 00:17:58.347 Max: 16 00:17:58.347 Min: 16 00:17:58.347 Number of Namespaces: 32 00:17:58.347 Compare Command: Supported 00:17:58.347 Write Uncorrectable Command: Not Supported 00:17:58.347 Dataset Management Command: Supported 00:17:58.347 Write Zeroes Command: Supported 00:17:58.347 Set Features Save Field: Not Supported 00:17:58.347 Reservations: Not Supported 00:17:58.347 Timestamp: Not Supported 00:17:58.347 Copy: Supported 00:17:58.347 Volatile Write Cache: Present 00:17:58.347 Atomic Write Unit (Normal): 1 00:17:58.347 Atomic Write Unit (PFail): 1 00:17:58.347 Atomic Compare & Write Unit: 1 00:17:58.347 Fused Compare & Write: Supported 00:17:58.347 Scatter-Gather List 00:17:58.347 SGL Command Set: Supported (Dword aligned) 00:17:58.347 SGL Keyed: Not Supported 00:17:58.347 SGL Bit Bucket Descriptor: Not Supported 00:17:58.347 SGL Metadata Pointer: Not Supported 00:17:58.347 Oversized SGL: Not Supported 00:17:58.347 SGL Metadata Address: Not Supported 00:17:58.347 SGL Offset: Not Supported 00:17:58.347 Transport SGL Data Block: Not Supported 00:17:58.347 Replay Protected Memory Block: Not Supported 00:17:58.347 00:17:58.347 Firmware Slot Information 00:17:58.347 ========================= 00:17:58.347 Active slot: 1 00:17:58.347 Slot 1 Firmware Revision: 24.09 00:17:58.347 00:17:58.347 00:17:58.347 Commands Supported and Effects 00:17:58.347 ============================== 00:17:58.347 Admin Commands 00:17:58.347 -------------- 00:17:58.347 Get Log Page (02h): Supported 00:17:58.347 Identify (06h): Supported 00:17:58.347 Abort (08h): Supported 00:17:58.347 Set Features (09h): Supported 00:17:58.347 Get Features (0Ah): Supported 00:17:58.347 Asynchronous Event Request (0Ch): Supported 
00:17:58.347 Keep Alive (18h): Supported 00:17:58.347 I/O Commands 00:17:58.347 ------------ 00:17:58.347 Flush (00h): Supported LBA-Change 00:17:58.347 Write (01h): Supported LBA-Change 00:17:58.347 Read (02h): Supported 00:17:58.347 Compare (05h): Supported 00:17:58.347 Write Zeroes (08h): Supported LBA-Change 00:17:58.347 Dataset Management (09h): Supported LBA-Change 00:17:58.347 Copy (19h): Supported LBA-Change 00:17:58.347 00:17:58.347 Error Log 00:17:58.347 ========= 00:17:58.347 00:17:58.347 Arbitration 00:17:58.347 =========== 00:17:58.347 Arbitration Burst: 1 00:17:58.347 00:17:58.347 Power Management 00:17:58.347 ================ 00:17:58.347 Number of Power States: 1 00:17:58.347 Current Power State: Power State #0 00:17:58.347 Power State #0: 00:17:58.347 Max Power: 0.00 W 00:17:58.347 Non-Operational State: Operational 00:17:58.347 Entry Latency: Not Reported 00:17:58.347 Exit Latency: Not Reported 00:17:58.347 Relative Read Throughput: 0 00:17:58.347 Relative Read Latency: 0 00:17:58.347 Relative Write Throughput: 0 00:17:58.347 Relative Write Latency: 0 00:17:58.347 Idle Power: Not Reported 00:17:58.347 Active Power: Not Reported 00:17:58.347 Non-Operational Permissive Mode: Not Supported 00:17:58.347 00:17:58.347 Health Information 00:17:58.347 ================== 00:17:58.347 Critical Warnings: 00:17:58.347 Available Spare Space: OK 00:17:58.347 Temperature: OK 00:17:58.347 Device Reliability: OK 00:17:58.347 Read Only: No 00:17:58.347 Volatile Memory Backup: OK 00:17:58.347 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:58.347 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:58.347 Available Spare: 0% 00:17:58.347 Available Sp[2024-07-21 08:14:07.761026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:58.347 [2024-07-21 08:14:07.761043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:17:58.347 [2024-07-21 08:14:07.761087] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:58.347 [2024-07-21 08:14:07.761105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.347 [2024-07-21 08:14:07.761120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.347 [2024-07-21 08:14:07.761130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.347 [2024-07-21 08:14:07.761140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.347 [2024-07-21 08:14:07.761284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:58.347 [2024-07-21 08:14:07.761304] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:58.347 [2024-07-21 08:14:07.762280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:58.347 [2024-07-21 08:14:07.762349] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:58.347 [2024-07-21 08:14:07.762363] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:58.347 [2024-07-21 08:14:07.763288] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:58.347 [2024-07-21 08:14:07.763309] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:17:58.347 [2024-07-21 08:14:07.763361] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:58.347 [2024-07-21 08:14:07.765326] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:58.347 are Threshold: 0% 00:17:58.347 Life Percentage Used: 0% 00:17:58.347 Data Units Read: 0 00:17:58.347 Data Units Written: 0 00:17:58.347 Host Read Commands: 0 00:17:58.347 Host Write Commands: 0 00:17:58.347 Controller Busy Time: 0 minutes 00:17:58.347 Power Cycles: 0 00:17:58.347 Power On Hours: 0 hours 00:17:58.347 Unsafe Shutdowns: 0 00:17:58.347 Unrecoverable Media Errors: 0 00:17:58.347 Lifetime Error Log Entries: 0 00:17:58.347 Warning Temperature Time: 0 minutes 00:17:58.347 Critical Temperature Time: 0 minutes 00:17:58.347 00:17:58.347 Number of Queues 00:17:58.347 ================ 00:17:58.347 Number of I/O Submission Queues: 127 00:17:58.347 Number of I/O Completion Queues: 127 00:17:58.347 00:17:58.347 Active Namespaces 00:17:58.347 ================= 00:17:58.347 Namespace ID:1 00:17:58.347 Error Recovery Timeout: Unlimited 00:17:58.347 Command Set Identifier: NVM (00h) 00:17:58.347 Deallocate: Supported 00:17:58.347 Deallocated/Unwritten Error: Not Supported 00:17:58.347 Deallocated Read Value: Unknown 00:17:58.347 Deallocate in Write Zeroes: Not Supported 00:17:58.347 Deallocated Guard Field: 0xFFFF 00:17:58.347 Flush: Supported 00:17:58.347 Reservation: Supported 00:17:58.347 Namespace Sharing Capabilities: Multiple Controllers 00:17:58.347 Size (in LBAs): 131072 (0GiB) 00:17:58.347 Capacity (in LBAs): 131072 (0GiB) 00:17:58.347 Utilization (in LBAs): 131072 (0GiB) 00:17:58.347 NGUID: 18F86681094349C89EA613053D5AC3FE 00:17:58.347 UUID: 18f86681-0943-49c8-9ea6-13053d5ac3fe 00:17:58.347 Thin Provisioning: Not Supported 00:17:58.347 Per-NS Atomic Units: Yes 00:17:58.347 Atomic Boundary Size (Normal): 0 
00:17:58.347 Atomic Boundary Size (PFail): 0 00:17:58.347 Atomic Boundary Offset: 0 00:17:58.347 Maximum Single Source Range Length: 65535 00:17:58.347 Maximum Copy Length: 65535 00:17:58.347 Maximum Source Range Count: 1 00:17:58.347 NGUID/EUI64 Never Reused: No 00:17:58.347 Namespace Write Protected: No 00:17:58.348 Number of LBA Formats: 1 00:17:58.348 Current LBA Format: LBA Format #00 00:17:58.348 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:58.348 00:17:58.348 08:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:58.348 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.605 [2024-07-21 08:14:07.986428] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:03.865 Initializing NVMe Controllers 00:18:03.865 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:03.865 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:03.865 Initialization complete. Launching workers. 
00:18:03.865 ======================================================== 00:18:03.865 Latency(us) 00:18:03.865 Device Information : IOPS MiB/s Average min max 00:18:03.865 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34505.80 134.79 3708.91 1167.14 8553.63 00:18:03.865 ======================================================== 00:18:03.865 Total : 34505.80 134.79 3708.91 1167.14 8553.63 00:18:03.865 00:18:03.865 [2024-07-21 08:14:13.008177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:03.865 08:14:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:03.865 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.865 [2024-07-21 08:14:13.253330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:09.138 Initializing NVMe Controllers 00:18:09.138 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:09.138 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:09.138 Initialization complete. Launching workers. 
00:18:09.138 ======================================================== 00:18:09.138 Latency(us) 00:18:09.138 Device Information : IOPS MiB/s Average min max 00:18:09.138 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16005.20 62.52 8005.71 6027.90 15824.20 00:18:09.138 ======================================================== 00:18:09.138 Total : 16005.20 62.52 8005.71 6027.90 15824.20 00:18:09.138 00:18:09.138 [2024-07-21 08:14:18.289212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:09.138 08:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:09.138 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.138 [2024-07-21 08:14:18.502221] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:14.422 [2024-07-21 08:14:23.571005] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:14.422 Initializing NVMe Controllers 00:18:14.422 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:14.423 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:14.423 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:14.423 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:14.423 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:14.423 Initialization complete. Launching workers. 
00:18:14.423 Starting thread on core 2 00:18:14.423 Starting thread on core 3 00:18:14.423 Starting thread on core 1 00:18:14.423 08:14:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:14.423 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.423 [2024-07-21 08:14:23.879069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:17.754 [2024-07-21 08:14:26.947907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:17.754 Initializing NVMe Controllers 00:18:17.754 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.754 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.754 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:17.754 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:17.754 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:17.754 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:17.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:17.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:17.754 Initialization complete. Launching workers. 
00:18:17.754 Starting thread on core 1 with urgent priority queue 00:18:17.754 Starting thread on core 2 with urgent priority queue 00:18:17.754 Starting thread on core 3 with urgent priority queue 00:18:17.754 Starting thread on core 0 with urgent priority queue 00:18:17.754 SPDK bdev Controller (SPDK1 ) core 0: 5241.00 IO/s 19.08 secs/100000 ios 00:18:17.754 SPDK bdev Controller (SPDK1 ) core 1: 5924.33 IO/s 16.88 secs/100000 ios 00:18:17.754 SPDK bdev Controller (SPDK1 ) core 2: 6180.00 IO/s 16.18 secs/100000 ios 00:18:17.754 SPDK bdev Controller (SPDK1 ) core 3: 6143.33 IO/s 16.28 secs/100000 ios 00:18:17.754 ======================================================== 00:18:17.754 00:18:17.754 08:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:17.754 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.754 [2024-07-21 08:14:27.240536] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:17.754 Initializing NVMe Controllers 00:18:17.754 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.754 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.754 Namespace ID: 1 size: 0GB 00:18:17.754 Initialization complete. 00:18:17.754 INFO: using host memory buffer for IO 00:18:17.754 Hello world! 
00:18:17.754 [2024-07-21 08:14:27.273070] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:17.754 08:14:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:18.013 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.013 [2024-07-21 08:14:27.564049] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:19.384 Initializing NVMe Controllers 00:18:19.384 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:19.384 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:19.384 Initialization complete. Launching workers. 00:18:19.384 submit (in ns) avg, min, max = 6633.8, 3508.9, 4013783.3 00:18:19.384 complete (in ns) avg, min, max = 26218.0, 2064.4, 5011174.4 00:18:19.384 00:18:19.384 Submit histogram 00:18:19.384 ================ 00:18:19.384 Range in us Cumulative Count 00:18:19.384 3.508 - 3.532: 0.0297% ( 4) 00:18:19.384 3.532 - 3.556: 0.3633% ( 45) 00:18:19.384 3.556 - 3.579: 1.2084% ( 114) 00:18:19.384 3.579 - 3.603: 3.5140% ( 311) 00:18:19.384 3.603 - 3.627: 7.6359% ( 556) 00:18:19.384 3.627 - 3.650: 14.2783% ( 896) 00:18:19.384 3.650 - 3.674: 21.6324% ( 992) 00:18:19.384 3.674 - 3.698: 29.7576% ( 1096) 00:18:19.384 3.698 - 3.721: 38.6241% ( 1196) 00:18:19.384 3.721 - 3.745: 45.8373% ( 973) 00:18:19.384 3.745 - 3.769: 52.2574% ( 866) 00:18:19.384 3.769 - 3.793: 57.7285% ( 738) 00:18:19.384 3.793 - 3.816: 62.1692% ( 599) 00:18:19.384 3.816 - 3.840: 66.4097% ( 572) 00:18:19.384 3.840 - 3.864: 70.3759% ( 535) 00:18:19.384 3.864 - 3.887: 74.2531% ( 523) 00:18:19.384 3.887 - 3.911: 77.8486% ( 485) 00:18:19.384 3.911 - 3.935: 81.1550% ( 446) 00:18:19.384 3.935 - 3.959: 84.0907% ( 396) 00:18:19.384 3.959 - 3.982: 86.6336% ( 343) 
00:18:19.384 3.982 - 4.006: 88.6945% ( 278) 00:18:19.384 4.006 - 4.030: 90.3180% ( 219) 00:18:19.384 4.030 - 4.053: 91.7414% ( 192) 00:18:19.384 4.053 - 4.077: 92.9498% ( 163) 00:18:19.384 4.077 - 4.101: 94.0915% ( 154) 00:18:19.384 4.101 - 4.124: 94.8995% ( 109) 00:18:19.384 4.124 - 4.148: 95.4630% ( 76) 00:18:19.384 4.148 - 4.172: 95.9893% ( 71) 00:18:19.384 4.172 - 4.196: 96.3229% ( 45) 00:18:19.384 4.196 - 4.219: 96.5527% ( 31) 00:18:19.384 4.219 - 4.243: 96.7603% ( 28) 00:18:19.384 4.243 - 4.267: 96.8864% ( 17) 00:18:19.384 4.267 - 4.290: 97.0346% ( 20) 00:18:19.384 4.290 - 4.314: 97.1236% ( 12) 00:18:19.384 4.314 - 4.338: 97.1829% ( 8) 00:18:19.384 4.338 - 4.361: 97.2867% ( 14) 00:18:19.384 4.361 - 4.385: 97.3460% ( 8) 00:18:19.385 4.385 - 4.409: 97.4127% ( 9) 00:18:19.385 4.409 - 4.433: 97.4646% ( 7) 00:18:19.385 4.433 - 4.456: 97.5165% ( 7) 00:18:19.385 4.456 - 4.480: 97.5313% ( 2) 00:18:19.385 4.480 - 4.504: 97.5610% ( 4) 00:18:19.385 4.504 - 4.527: 97.5684% ( 1) 00:18:19.385 4.527 - 4.551: 97.5906% ( 3) 00:18:19.385 4.551 - 4.575: 97.5980% ( 1) 00:18:19.385 4.599 - 4.622: 97.6129% ( 2) 00:18:19.385 4.622 - 4.646: 97.6203% ( 1) 00:18:19.385 4.646 - 4.670: 97.6277% ( 1) 00:18:19.385 4.670 - 4.693: 97.6425% ( 2) 00:18:19.385 4.693 - 4.717: 97.6574% ( 2) 00:18:19.385 4.717 - 4.741: 97.6944% ( 5) 00:18:19.385 4.741 - 4.764: 97.7389% ( 6) 00:18:19.385 4.764 - 4.788: 97.7834% ( 6) 00:18:19.385 4.788 - 4.812: 97.8130% ( 4) 00:18:19.385 4.812 - 4.836: 97.8649% ( 7) 00:18:19.385 4.836 - 4.859: 97.9020% ( 5) 00:18:19.385 4.859 - 4.883: 97.9761% ( 10) 00:18:19.385 4.883 - 4.907: 98.0428% ( 9) 00:18:19.385 4.907 - 4.930: 98.1096% ( 9) 00:18:19.385 4.930 - 4.954: 98.1392% ( 4) 00:18:19.385 4.954 - 4.978: 98.1837% ( 6) 00:18:19.385 4.978 - 5.001: 98.2208% ( 5) 00:18:19.385 5.001 - 5.025: 98.2653% ( 6) 00:18:19.385 5.025 - 5.049: 98.3023% ( 5) 00:18:19.385 5.049 - 5.073: 98.3246% ( 3) 00:18:19.385 5.073 - 5.096: 98.3394% ( 2) 00:18:19.385 5.096 - 5.120: 98.3468% ( 1) 
00:18:19.385 5.120 - 5.144: 98.3542% ( 1) 00:18:19.385 5.144 - 5.167: 98.3765% ( 3) 00:18:19.385 5.167 - 5.191: 98.3839% ( 1) 00:18:19.385 5.191 - 5.215: 98.3987% ( 2) 00:18:19.385 5.215 - 5.239: 98.4135% ( 2) 00:18:19.385 5.239 - 5.262: 98.4283% ( 2) 00:18:19.385 5.262 - 5.286: 98.4358% ( 1) 00:18:19.385 5.286 - 5.310: 98.4432% ( 1) 00:18:19.385 5.310 - 5.333: 98.4580% ( 2) 00:18:19.385 5.333 - 5.357: 98.4654% ( 1) 00:18:19.385 5.357 - 5.381: 98.4728% ( 1) 00:18:19.385 5.547 - 5.570: 98.4802% ( 1) 00:18:19.385 5.641 - 5.665: 98.4877% ( 1) 00:18:19.385 5.689 - 5.713: 98.4951% ( 1) 00:18:19.385 5.760 - 5.784: 98.5025% ( 1) 00:18:19.385 5.807 - 5.831: 98.5099% ( 1) 00:18:19.385 6.163 - 6.210: 98.5173% ( 1) 00:18:19.385 6.447 - 6.495: 98.5247% ( 1) 00:18:19.385 6.590 - 6.637: 98.5321% ( 1) 00:18:19.385 6.637 - 6.684: 98.5544% ( 3) 00:18:19.385 6.732 - 6.779: 98.5618% ( 1) 00:18:19.385 6.827 - 6.874: 98.5692% ( 1) 00:18:19.385 6.874 - 6.921: 98.5840% ( 2) 00:18:19.385 6.969 - 7.016: 98.5914% ( 1) 00:18:19.385 7.016 - 7.064: 98.5989% ( 1) 00:18:19.385 7.111 - 7.159: 98.6137% ( 2) 00:18:19.385 7.443 - 7.490: 98.6285% ( 2) 00:18:19.385 7.490 - 7.538: 98.6359% ( 1) 00:18:19.385 7.538 - 7.585: 98.6433% ( 1) 00:18:19.385 7.633 - 7.680: 98.6508% ( 1) 00:18:19.385 7.964 - 8.012: 98.6582% ( 1) 00:18:19.385 8.059 - 8.107: 98.6730% ( 2) 00:18:19.385 8.107 - 8.154: 98.6804% ( 1) 00:18:19.385 8.154 - 8.201: 98.6952% ( 2) 00:18:19.385 8.201 - 8.249: 98.7101% ( 2) 00:18:19.385 8.249 - 8.296: 98.7175% ( 1) 00:18:19.385 8.296 - 8.344: 98.7249% ( 1) 00:18:19.385 8.486 - 8.533: 98.7323% ( 1) 00:18:19.385 8.581 - 8.628: 98.7471% ( 2) 00:18:19.385 8.628 - 8.676: 98.7545% ( 1) 00:18:19.385 8.818 - 8.865: 98.7620% ( 1) 00:18:19.385 8.865 - 8.913: 98.7694% ( 1) 00:18:19.385 8.913 - 8.960: 98.7842% ( 2) 00:18:19.385 9.007 - 9.055: 98.7916% ( 1) 00:18:19.385 9.055 - 9.102: 98.7990% ( 1) 00:18:19.385 9.150 - 9.197: 98.8064% ( 1) 00:18:19.385 9.292 - 9.339: 98.8213% ( 2) 00:18:19.385 9.481 - 
9.529: 98.8287% ( 1) 00:18:19.385 9.624 - 9.671: 98.8361% ( 1) 00:18:19.385 9.766 - 9.813: 98.8435% ( 1) 00:18:19.385 9.813 - 9.861: 98.8509% ( 1) 00:18:19.385 9.956 - 10.003: 98.8583% ( 1) 00:18:19.385 10.382 - 10.430: 98.8657% ( 1) 00:18:19.385 10.904 - 10.951: 98.8732% ( 1) 00:18:19.385 11.046 - 11.093: 98.8806% ( 1) 00:18:19.385 11.188 - 11.236: 98.8880% ( 1) 00:18:19.385 11.520 - 11.567: 98.8954% ( 1) 00:18:19.385 11.804 - 11.852: 98.9102% ( 2) 00:18:19.385 12.231 - 12.326: 98.9176% ( 1) 00:18:19.385 12.326 - 12.421: 98.9251% ( 1) 00:18:19.385 12.421 - 12.516: 98.9325% ( 1) 00:18:19.385 12.516 - 12.610: 98.9399% ( 1) 00:18:19.385 12.895 - 12.990: 98.9473% ( 1) 00:18:19.385 13.179 - 13.274: 98.9547% ( 1) 00:18:19.385 13.369 - 13.464: 98.9621% ( 1) 00:18:19.385 13.464 - 13.559: 98.9769% ( 2) 00:18:19.385 13.559 - 13.653: 98.9918% ( 2) 00:18:19.385 13.653 - 13.748: 98.9992% ( 1) 00:18:19.385 13.748 - 13.843: 99.0066% ( 1) 00:18:19.385 14.222 - 14.317: 99.0214% ( 2) 00:18:19.385 14.317 - 14.412: 99.0363% ( 2) 00:18:19.385 15.265 - 15.360: 99.0437% ( 1) 00:18:19.385 15.929 - 16.024: 99.0585% ( 2) 00:18:19.385 17.256 - 17.351: 99.0659% ( 1) 00:18:19.385 17.351 - 17.446: 99.0807% ( 2) 00:18:19.385 17.446 - 17.541: 99.0956% ( 2) 00:18:19.385 17.541 - 17.636: 99.1252% ( 4) 00:18:19.385 17.636 - 17.730: 99.1475% ( 3) 00:18:19.385 17.730 - 17.825: 99.2068% ( 8) 00:18:19.385 17.825 - 17.920: 99.2512% ( 6) 00:18:19.385 17.920 - 18.015: 99.2735% ( 3) 00:18:19.385 18.015 - 18.110: 99.3180% ( 6) 00:18:19.385 18.110 - 18.204: 99.3995% ( 11) 00:18:19.385 18.204 - 18.299: 99.4662% ( 9) 00:18:19.385 18.299 - 18.394: 99.5107% ( 6) 00:18:19.385 18.394 - 18.489: 99.5997% ( 12) 00:18:19.385 18.489 - 18.584: 99.6367% ( 5) 00:18:19.385 18.584 - 18.679: 99.6812% ( 6) 00:18:19.385 18.679 - 18.773: 99.7109% ( 4) 00:18:19.385 18.773 - 18.868: 99.7405% ( 4) 00:18:19.385 18.868 - 18.963: 99.7924% ( 7) 00:18:19.385 18.963 - 19.058: 99.8147% ( 3) 00:18:19.385 19.058 - 19.153: 99.8295% ( 2) 
00:18:19.385 19.153 - 19.247: 99.8443% ( 2) 00:18:19.385 19.247 - 19.342: 99.8666% ( 3) 00:18:19.385 19.532 - 19.627: 99.8740% ( 1) 00:18:19.385 21.902 - 21.997: 99.8814% ( 1) 00:18:19.385 21.997 - 22.092: 99.8888% ( 1) 00:18:19.385 22.187 - 22.281: 99.8962% ( 1) 00:18:19.385 22.376 - 22.471: 99.9036% ( 1) 00:18:19.385 22.945 - 23.040: 99.9110% ( 1) 00:18:19.385 23.324 - 23.419: 99.9259% ( 2) 00:18:19.385 29.772 - 29.961: 99.9333% ( 1) 00:18:19.385 3980.705 - 4004.978: 99.9926% ( 8) 00:18:19.385 4004.978 - 4029.250: 100.0000% ( 1) 00:18:19.385 00:18:19.385 Complete histogram 00:18:19.385 ================== 00:18:19.385 Range in us Cumulative Count 00:18:19.385 2.062 - 2.074: 2.9431% ( 397) 00:18:19.385 2.074 - 2.086: 35.8885% ( 4444) 00:18:19.385 2.086 - 2.098: 41.8637% ( 806) 00:18:19.385 2.098 - 2.110: 47.3942% ( 746) 00:18:19.385 2.110 - 2.121: 57.4542% ( 1357) 00:18:19.385 2.121 - 2.133: 59.4633% ( 271) 00:18:19.385 2.133 - 2.145: 66.0983% ( 895) 00:18:19.385 2.145 - 2.157: 77.3445% ( 1517) 00:18:19.385 2.157 - 2.169: 78.4862% ( 154) 00:18:19.385 2.169 - 2.181: 82.1558% ( 495) 00:18:19.385 2.181 - 2.193: 86.1294% ( 536) 00:18:19.385 2.193 - 2.204: 87.0561% ( 125) 00:18:19.385 2.204 - 2.216: 88.3164% ( 170) 00:18:19.385 2.216 - 2.228: 91.6821% ( 454) 00:18:19.385 2.228 - 2.240: 93.0907% ( 190) 00:18:19.385 2.240 - 2.252: 93.9210% ( 112) 00:18:19.385 2.252 - 2.264: 94.6327% ( 96) 00:18:19.385 2.264 - 2.276: 94.8328% ( 27) 00:18:19.385 2.276 - 2.287: 95.0626% ( 31) 00:18:19.385 2.287 - 2.299: 95.4852% ( 57) 00:18:19.385 2.299 - 2.311: 95.8262% ( 46) 00:18:19.385 2.311 - 2.323: 95.9671% ( 19) 00:18:19.385 2.323 - 2.335: 96.0338% ( 9) 00:18:19.385 2.335 - 2.347: 96.1079% ( 10) 00:18:19.385 2.347 - 2.359: 96.2933% ( 25) 00:18:19.385 2.359 - 2.370: 96.5824% ( 39) 00:18:19.385 2.370 - 2.382: 96.9753% ( 53) 00:18:19.385 2.382 - 2.394: 97.3237% ( 47) 00:18:19.385 2.394 - 2.406: 97.5165% ( 26) 00:18:19.385 2.406 - 2.418: 97.7760% ( 35) 00:18:19.385 2.418 - 2.430: 97.9242% 
( 20) 00:18:19.385 2.430 - 2.441: 98.0503% ( 17) 00:18:19.385 2.441 - 2.453: 98.1392% ( 12) 00:18:19.385 2.453 - 2.465: 98.2134% ( 10) 00:18:19.385 2.465 - 2.477: 98.2727% ( 8) 00:18:19.385 2.477 - 2.489: 98.2875% ( 2) 00:18:19.385 2.489 - 2.501: 98.3023% ( 2) 00:18:19.385 2.501 - 2.513: 98.3171% ( 2) 00:18:19.385 2.513 - 2.524: 98.3246% ( 1) 00:18:19.385 2.524 - 2.536: 98.3320% ( 1) 00:18:19.385 2.536 - 2.548: 98.3468% ( 2) 00:18:19.385 2.548 - 2.560: 98.3616% ( 2) 00:18:19.385 2.560 - 2.572: 98.3690% ( 1) 00:18:19.385 2.584 - 2.596: 98.3765% ( 1) 00:18:19.385 2.619 - 2.631: 98.3913% ( 2) 00:18:19.385 2.631 - 2.643: 98.3987% ( 1) 00:18:19.385 2.643 - 2.655: 98.4061% ( 1) 00:18:19.385 2.726 - 2.738: 98.4135% ( 1) 00:18:19.385 3.058 - 3.081: 98.4209% ( 1) 00:18:19.385 3.413 - 3.437: 98.4283% ( 1) 00:18:19.385 3.437 - 3.461: 98.4432% ( 2) 00:18:19.385 3.461 - 3.484: 98.4580% ( 2) 00:18:19.385 3.484 - 3.508: 98.4728% ( 2) 00:18:19.385 3.508 - 3.532: 98.4951% ( 3) 00:18:19.385 3.556 - 3.579: 98.5025% ( 1) 00:18:19.385 3.579 - 3.603: 98.5099% ( 1) 00:18:19.385 3.603 - 3.627: 98.5173% ( 1) 00:18:19.385 3.627 - 3.650: 98.5247% ( 1) 00:18:19.385 3.650 - 3.674: 98.5470% ( 3) 00:18:19.385 3.674 - 3.698: 98.5618% ( 2) 00:18:19.385 3.721 - 3.745: 98.5692% ( 1) 00:18:19.385 3.769 - 3.793: 98.5914% ( 3) 00:18:19.385 3.793 - 3.816: 98.6137% ( 3) 00:18:19.385 3.816 - 3.840: 98.6211% ( 1) 00:18:19.386 3.864 - 3.887: 98.6285% ( 1) 00:18:19.386 3.959 - 3.982: 98.6508% ( 3) 00:18:19.386 4.053 - 4.077: 98.6582% ( 1) 00:18:19.386 4.409 - 4.433: 98.6656% ( 1) 00:18:19.386 4.883 - 4.907: 98.6730% ( 1) 00:18:19.386 5.001 - 5.025: 98.6804% ( 1) 00:18:19.386 5.049 - 5.073: 98.6878% ( 1) 00:18:19.386 5.310 - 5.333: 98.7026% ( 2) 00:18:19.386 5.381 - 5.404: 98.7101% ( 1) 00:18:19.386 5.523 - 5.547: 98.7175% ( 1) 00:18:19.386 5.618 - 5.641: 98.7249% ( 1) 00:18:19.386 5.689 - 5.713: 98.7323% ( 1) 00:18:19.386 5.736 - 5.760: 98.7471% ( 2) 00:18:19.386 5.926 - 5.950: 98.7545% ( 1) 00:18:19.386 
[2024-07-21 08:14:28.589176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
5.950 - 5.973: 98.7620% ( 1) 00:18:19.386 5.997 - 6.021: 98.7768% ( 2) 00:18:19.386 6.068 - 6.116: 98.7842% ( 1) 00:18:19.386 6.258 - 6.305: 98.7916% ( 1) 00:18:19.386 6.305 - 6.353: 98.7990% ( 1) 00:18:19.386 6.353 - 6.400: 98.8064% ( 1) 00:18:19.386 6.921 - 6.969: 98.8138% ( 1) 00:18:19.386 6.969 - 7.016: 98.8213% ( 1) 00:18:19.386 7.064 - 7.111: 98.8361% ( 2) 00:18:19.386 7.490 - 7.538: 98.8435% ( 1) 00:18:19.386 7.727 - 7.775: 98.8509% ( 1) 00:18:19.386 8.296 - 8.344: 98.8583% ( 1) 00:18:19.386 8.439 - 8.486: 98.8657% ( 1) 00:18:19.386 9.007 - 9.055: 98.8732% ( 1) 00:18:19.386 12.041 - 12.089: 98.8806% ( 1) 00:18:19.386 15.550 - 15.644: 98.8880% ( 1) 00:18:19.386 15.739 - 15.834: 98.9176% ( 4) 00:18:19.386 15.834 - 15.929: 98.9473% ( 4) 00:18:19.386 15.929 - 16.024: 98.9769% ( 4) 00:18:19.386 16.024 - 16.119: 98.9844% ( 1) 00:18:19.386 16.119 - 16.213: 99.0214% ( 5) 00:18:19.386 16.213 - 16.308: 99.0511% ( 4) 00:18:19.386 16.308 - 16.403: 99.0881% ( 5) 00:18:19.386 16.403 - 16.498: 99.1400% ( 7) 00:18:19.386 16.498 - 16.593: 99.1993% ( 8) 00:18:19.386 16.687 - 16.782: 99.2290% ( 4) 00:18:19.386 16.782 - 16.877: 99.2587% ( 4) 00:18:19.386 16.877 - 16.972: 99.2957% ( 5) 00:18:19.386 16.972 - 17.067: 99.3031% ( 1) 00:18:19.386 17.067 - 17.161: 99.3328% ( 4) 00:18:19.386 17.161 - 17.256: 99.3476% ( 2) 00:18:19.386 17.351 - 17.446: 99.3624% ( 2) 00:18:19.386 17.541 - 17.636: 99.3699% ( 1) 00:18:19.386 17.636 - 17.730: 99.3773% ( 1) 00:18:19.386 17.825 - 17.920: 99.3921% ( 2) 00:18:19.386 33.564 - 33.754: 99.3995% ( 1) 00:18:19.386 2852.030 - 2864.166: 99.4069% ( 1) 00:18:19.386 3980.705 - 4004.978: 99.9481% ( 73) 00:18:19.386 4004.978 - 4029.250: 99.9926% ( 6) 00:18:19.386 5000.154 - 5024.427: 100.0000% ( 1) 00:18:19.386 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user
/var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:19.386 [ 00:18:19.386 { 00:18:19.386 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:19.386 "subtype": "Discovery", 00:18:19.386 "listen_addresses": [], 00:18:19.386 "allow_any_host": true, 00:18:19.386 "hosts": [] 00:18:19.386 }, 00:18:19.386 { 00:18:19.386 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:19.386 "subtype": "NVMe", 00:18:19.386 "listen_addresses": [ 00:18:19.386 { 00:18:19.386 "trtype": "VFIOUSER", 00:18:19.386 "adrfam": "IPv4", 00:18:19.386 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:19.386 "trsvcid": "0" 00:18:19.386 } 00:18:19.386 ], 00:18:19.386 "allow_any_host": true, 00:18:19.386 "hosts": [], 00:18:19.386 "serial_number": "SPDK1", 00:18:19.386 "model_number": "SPDK bdev Controller", 00:18:19.386 "max_namespaces": 32, 00:18:19.386 "min_cntlid": 1, 00:18:19.386 "max_cntlid": 65519, 00:18:19.386 "namespaces": [ 00:18:19.386 { 00:18:19.386 "nsid": 1, 00:18:19.386 "bdev_name": "Malloc1", 00:18:19.386 "name": "Malloc1", 00:18:19.386 "nguid": "18F86681094349C89EA613053D5AC3FE", 00:18:19.386 "uuid": "18f86681-0943-49c8-9ea6-13053d5ac3fe" 00:18:19.386 } 00:18:19.386 ] 00:18:19.386 }, 00:18:19.386 { 00:18:19.386 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:19.386 "subtype": "NVMe", 00:18:19.386 "listen_addresses": [ 00:18:19.386 { 00:18:19.386 "trtype": "VFIOUSER", 00:18:19.386 "adrfam": "IPv4", 00:18:19.386 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:18:19.386 "trsvcid": "0" 00:18:19.386 } 00:18:19.386 ], 00:18:19.386 "allow_any_host": true, 00:18:19.386 "hosts": [], 00:18:19.386 "serial_number": "SPDK2", 00:18:19.386 "model_number": "SPDK bdev Controller", 00:18:19.386 "max_namespaces": 32, 00:18:19.386 "min_cntlid": 1, 00:18:19.386 "max_cntlid": 65519, 00:18:19.386 "namespaces": [ 00:18:19.386 { 00:18:19.386 "nsid": 1, 00:18:19.386 "bdev_name": "Malloc2", 00:18:19.386 "name": "Malloc2", 00:18:19.386 "nguid": "CC06231CB006426E92E82062E31244A1", 00:18:19.386 "uuid": "cc06231c-b006-426e-92e8-2062e31244a1" 00:18:19.386 } 00:18:19.386 ] 00:18:19.386 } 00:18:19.386 ] 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4083961 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:19.386 08:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:19.386 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.643 [2024-07-21 08:14:29.025129] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:19.643 Malloc3 00:18:19.643 08:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:19.901 [2024-07-21 08:14:29.386685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:19.901 08:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:19.901 Asynchronous Event Request test 00:18:19.901 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:19.901 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:19.901 Registering asynchronous event callbacks... 00:18:19.901 Starting namespace attribute notice tests for all controllers... 00:18:19.901 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:19.901 aer_cb - Changed Namespace 00:18:19.901 Cleaning up... 
00:18:20.161 [ 00:18:20.161 { 00:18:20.161 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:20.161 "subtype": "Discovery", 00:18:20.161 "listen_addresses": [], 00:18:20.161 "allow_any_host": true, 00:18:20.161 "hosts": [] 00:18:20.161 }, 00:18:20.161 { 00:18:20.161 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:20.161 "subtype": "NVMe", 00:18:20.161 "listen_addresses": [ 00:18:20.161 { 00:18:20.161 "trtype": "VFIOUSER", 00:18:20.161 "adrfam": "IPv4", 00:18:20.161 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:20.161 "trsvcid": "0" 00:18:20.161 } 00:18:20.161 ], 00:18:20.161 "allow_any_host": true, 00:18:20.161 "hosts": [], 00:18:20.161 "serial_number": "SPDK1", 00:18:20.161 "model_number": "SPDK bdev Controller", 00:18:20.161 "max_namespaces": 32, 00:18:20.161 "min_cntlid": 1, 00:18:20.161 "max_cntlid": 65519, 00:18:20.161 "namespaces": [ 00:18:20.161 { 00:18:20.161 "nsid": 1, 00:18:20.161 "bdev_name": "Malloc1", 00:18:20.161 "name": "Malloc1", 00:18:20.161 "nguid": "18F86681094349C89EA613053D5AC3FE", 00:18:20.161 "uuid": "18f86681-0943-49c8-9ea6-13053d5ac3fe" 00:18:20.161 }, 00:18:20.161 { 00:18:20.161 "nsid": 2, 00:18:20.161 "bdev_name": "Malloc3", 00:18:20.161 "name": "Malloc3", 00:18:20.161 "nguid": "C9F3B4AB642D44C1871FC4085B7AB566", 00:18:20.161 "uuid": "c9f3b4ab-642d-44c1-871f-c4085b7ab566" 00:18:20.161 } 00:18:20.161 ] 00:18:20.161 }, 00:18:20.161 { 00:18:20.161 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:20.161 "subtype": "NVMe", 00:18:20.161 "listen_addresses": [ 00:18:20.161 { 00:18:20.161 "trtype": "VFIOUSER", 00:18:20.161 "adrfam": "IPv4", 00:18:20.161 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:20.161 "trsvcid": "0" 00:18:20.161 } 00:18:20.161 ], 00:18:20.161 "allow_any_host": true, 00:18:20.161 "hosts": [], 00:18:20.161 "serial_number": "SPDK2", 00:18:20.161 "model_number": "SPDK bdev Controller", 00:18:20.161 "max_namespaces": 32, 00:18:20.161 "min_cntlid": 1, 00:18:20.161 "max_cntlid": 65519, 00:18:20.161 "namespaces": [ 
00:18:20.161 { 00:18:20.161 "nsid": 1, 00:18:20.161 "bdev_name": "Malloc2", 00:18:20.161 "name": "Malloc2", 00:18:20.161 "nguid": "CC06231CB006426E92E82062E31244A1", 00:18:20.161 "uuid": "cc06231c-b006-426e-92e8-2062e31244a1" 00:18:20.161 } 00:18:20.161 ] 00:18:20.161 } 00:18:20.161 ] 00:18:20.161 08:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4083961 00:18:20.161 08:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:20.161 08:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:20.161 08:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:20.161 08:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:20.161 [2024-07-21 08:14:29.680026] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:18:20.161 [2024-07-21 08:14:29.680067] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084046 ] 00:18:20.161 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.161 [2024-07-21 08:14:29.712578] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:20.161 [2024-07-21 08:14:29.721890] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:20.161 [2024-07-21 08:14:29.721934] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f69c35cc000 00:18:20.161 [2024-07-21 08:14:29.722889] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:20.161 [2024-07-21 08:14:29.723891] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:20.161 [2024-07-21 08:14:29.724899] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:20.161 [2024-07-21 08:14:29.725922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:20.162 [2024-07-21 08:14:29.726929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:20.162 [2024-07-21 08:14:29.727925] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:20.162 [2024-07-21 08:14:29.728949] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:20.162 [2024-07-21 08:14:29.729943] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:20.162 [2024-07-21 08:14:29.730950] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:20.162 [2024-07-21 08:14:29.730973] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f69c2380000 00:18:20.162 [2024-07-21 08:14:29.732112] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:20.162 [2024-07-21 08:14:29.747249] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:20.162 [2024-07-21 08:14:29.747279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:20.162 [2024-07-21 08:14:29.752375] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:20.162 [2024-07-21 08:14:29.752426] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:20.162 [2024-07-21 08:14:29.752510] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:20.162 [2024-07-21 08:14:29.752532] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:20.162 [2024-07-21 08:14:29.752543] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:20.162 [2024-07-21 08:14:29.753380] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:20.162 [2024-07-21 08:14:29.753401] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:20.162 [2024-07-21 08:14:29.753414] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:20.162 [2024-07-21 08:14:29.754386] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:20.162 [2024-07-21 08:14:29.754407] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:20.162 [2024-07-21 08:14:29.754424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:20.162 [2024-07-21 08:14:29.755399] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:20.162 [2024-07-21 08:14:29.755419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:20.162 [2024-07-21 08:14:29.756405] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:20.162 [2024-07-21 08:14:29.756425] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:20.162 [2024-07-21 08:14:29.756434] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:20.162 [2024-07-21 08:14:29.756445] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:20.162 [2024-07-21 08:14:29.756554] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:20.162 [2024-07-21 08:14:29.756563] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:20.162 [2024-07-21 08:14:29.756571] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:20.162 [2024-07-21 08:14:29.757410] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:20.162 [2024-07-21 08:14:29.758417] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:20.162 [2024-07-21 08:14:29.759418] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:20.162 [2024-07-21 08:14:29.760420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:20.162 [2024-07-21 08:14:29.760486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:20.162 [2024-07-21 08:14:29.761432] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:20.162 [2024-07-21 08:14:29.761451] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:20.162 [2024-07-21 08:14:29.761460] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:20.162 [2024-07-21 08:14:29.761483] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:20.162 [2024-07-21 08:14:29.761496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:20.162 [2024-07-21 08:14:29.761514] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:20.162 [2024-07-21 08:14:29.761523] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:20.162 [2024-07-21 08:14:29.761540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:20.162 [2024-07-21 08:14:29.769626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:20.162 [2024-07-21 08:14:29.769648] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:20.162 [2024-07-21 08:14:29.769664] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:20.162 [2024-07-21 08:14:29.769672] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:20.162 [2024-07-21 08:14:29.769680] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:20.162 [2024-07-21 08:14:29.769688] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:20.162 [2024-07-21 
08:14:29.769695] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:20.162 [2024-07-21 08:14:29.769703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:20.162 [2024-07-21 08:14:29.769717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:20.162 [2024-07-21 08:14:29.769733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:20.162 [2024-07-21 08:14:29.777627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:20.162 [2024-07-21 08:14:29.777654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.162 [2024-07-21 08:14:29.777669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.162 [2024-07-21 08:14:29.777681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.162 [2024-07-21 08:14:29.777693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.162 [2024-07-21 08:14:29.777702] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:20.162 [2024-07-21 08:14:29.777716] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:20.162 [2024-07-21 
08:14:29.777730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:20.162 [2024-07-21 08:14:29.785627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:20.162 [2024-07-21 08:14:29.785644] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:20.162 [2024-07-21 08:14:29.785653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:20.162 [2024-07-21 08:14:29.785664] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:20.162 [2024-07-21 08:14:29.785674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:20.162 [2024-07-21 08:14:29.785688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:20.422 [2024-07-21 08:14:29.793623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:20.422 [2024-07-21 08:14:29.793697] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.793718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.793732] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:20.422 [2024-07-21 
08:14:29.793740] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:20.422 [2024-07-21 08:14:29.793750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:20.422 [2024-07-21 08:14:29.801640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:20.422 [2024-07-21 08:14:29.801678] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:20.422 [2024-07-21 08:14:29.801694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.801708] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.801721] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:20.422 [2024-07-21 08:14:29.801728] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:20.422 [2024-07-21 08:14:29.801738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:20.422 [2024-07-21 08:14:29.809623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:20.422 [2024-07-21 08:14:29.809650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.809666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.809679] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:20.422 [2024-07-21 08:14:29.809687] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:20.422 [2024-07-21 08:14:29.809697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:20.422 [2024-07-21 08:14:29.817627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:20.422 [2024-07-21 08:14:29.817647] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.817660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.817673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.817684] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.817692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.817701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.817709] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:18:20.422 [2024-07-21 08:14:29.817717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:20.422 [2024-07-21 08:14:29.817728] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:20.422 [2024-07-21 08:14:29.817754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:20.422 [2024-07-21 08:14:29.825624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:20.422 [2024-07-21 08:14:29.825652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:20.422 [2024-07-21 08:14:29.833621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:20.422 [2024-07-21 08:14:29.833647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:20.422 [2024-07-21 08:14:29.841625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:20.422 [2024-07-21 08:14:29.841650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:20.422 [2024-07-21 08:14:29.849621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:20.422 [2024-07-21 08:14:29.849655] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:20.422 [2024-07-21 08:14:29.849666] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:20.422 [2024-07-21 
08:14:29.849672] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:20.422 [2024-07-21 08:14:29.849678] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:20.422 [2024-07-21 08:14:29.849688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:20.422 [2024-07-21 08:14:29.849700] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:20.422 [2024-07-21 08:14:29.849708] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:20.422 [2024-07-21 08:14:29.849717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:20.422 [2024-07-21 08:14:29.849727] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:20.422 [2024-07-21 08:14:29.849735] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:20.422 [2024-07-21 08:14:29.849744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:20.422 [2024-07-21 08:14:29.849756] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:20.422 [2024-07-21 08:14:29.849764] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:20.422 [2024-07-21 08:14:29.849772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:20.422 [2024-07-21 08:14:29.857637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:20.422 [2024-07-21 08:14:29.857665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:20.422 [2024-07-21 08:14:29.857682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:20.422 [2024-07-21 08:14:29.857694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:20.422 ===================================================== 00:18:20.422 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:20.422 ===================================================== 00:18:20.422 Controller Capabilities/Features 00:18:20.422 ================================ 00:18:20.422 Vendor ID: 4e58 00:18:20.422 Subsystem Vendor ID: 4e58 00:18:20.422 Serial Number: SPDK2 00:18:20.422 Model Number: SPDK bdev Controller 00:18:20.422 Firmware Version: 24.09 00:18:20.422 Recommended Arb Burst: 6 00:18:20.422 IEEE OUI Identifier: 8d 6b 50 00:18:20.422 Multi-path I/O 00:18:20.422 May have multiple subsystem ports: Yes 00:18:20.422 May have multiple controllers: Yes 00:18:20.422 Associated with SR-IOV VF: No 00:18:20.422 Max Data Transfer Size: 131072 00:18:20.422 Max Number of Namespaces: 32 00:18:20.422 Max Number of I/O Queues: 127 00:18:20.422 NVMe Specification Version (VS): 1.3 00:18:20.422 NVMe Specification Version (Identify): 1.3 00:18:20.422 Maximum Queue Entries: 256 00:18:20.422 Contiguous Queues Required: Yes 00:18:20.422 Arbitration Mechanisms Supported 00:18:20.422 Weighted Round Robin: Not Supported 00:18:20.422 Vendor Specific: Not Supported 00:18:20.422 Reset Timeout: 15000 ms 00:18:20.422 Doorbell Stride: 4 bytes 00:18:20.422 NVM Subsystem Reset: Not Supported 00:18:20.422 Command Sets Supported 00:18:20.422 NVM Command Set: Supported 00:18:20.422 Boot Partition: Not Supported 
00:18:20.422 Memory Page Size Minimum: 4096 bytes 00:18:20.422 Memory Page Size Maximum: 4096 bytes 00:18:20.422 Persistent Memory Region: Not Supported 00:18:20.422 Optional Asynchronous Events Supported 00:18:20.422 Namespace Attribute Notices: Supported 00:18:20.422 Firmware Activation Notices: Not Supported 00:18:20.422 ANA Change Notices: Not Supported 00:18:20.422 PLE Aggregate Log Change Notices: Not Supported 00:18:20.422 LBA Status Info Alert Notices: Not Supported 00:18:20.422 EGE Aggregate Log Change Notices: Not Supported 00:18:20.422 Normal NVM Subsystem Shutdown event: Not Supported 00:18:20.422 Zone Descriptor Change Notices: Not Supported 00:18:20.422 Discovery Log Change Notices: Not Supported 00:18:20.422 Controller Attributes 00:18:20.422 128-bit Host Identifier: Supported 00:18:20.422 Non-Operational Permissive Mode: Not Supported 00:18:20.423 NVM Sets: Not Supported 00:18:20.423 Read Recovery Levels: Not Supported 00:18:20.423 Endurance Groups: Not Supported 00:18:20.423 Predictable Latency Mode: Not Supported 00:18:20.423 Traffic Based Keep ALive: Not Supported 00:18:20.423 Namespace Granularity: Not Supported 00:18:20.423 SQ Associations: Not Supported 00:18:20.423 UUID List: Not Supported 00:18:20.423 Multi-Domain Subsystem: Not Supported 00:18:20.423 Fixed Capacity Management: Not Supported 00:18:20.423 Variable Capacity Management: Not Supported 00:18:20.423 Delete Endurance Group: Not Supported 00:18:20.423 Delete NVM Set: Not Supported 00:18:20.423 Extended LBA Formats Supported: Not Supported 00:18:20.423 Flexible Data Placement Supported: Not Supported 00:18:20.423 00:18:20.423 Controller Memory Buffer Support 00:18:20.423 ================================ 00:18:20.423 Supported: No 00:18:20.423 00:18:20.423 Persistent Memory Region Support 00:18:20.423 ================================ 00:18:20.423 Supported: No 00:18:20.423 00:18:20.423 Admin Command Set Attributes 00:18:20.423 ============================ 00:18:20.423 Security 
Send/Receive: Not Supported 00:18:20.423 Format NVM: Not Supported 00:18:20.423 Firmware Activate/Download: Not Supported 00:18:20.423 Namespace Management: Not Supported 00:18:20.423 Device Self-Test: Not Supported 00:18:20.423 Directives: Not Supported 00:18:20.423 NVMe-MI: Not Supported 00:18:20.423 Virtualization Management: Not Supported 00:18:20.423 Doorbell Buffer Config: Not Supported 00:18:20.423 Get LBA Status Capability: Not Supported 00:18:20.423 Command & Feature Lockdown Capability: Not Supported 00:18:20.423 Abort Command Limit: 4 00:18:20.423 Async Event Request Limit: 4 00:18:20.423 Number of Firmware Slots: N/A 00:18:20.423 Firmware Slot 1 Read-Only: N/A 00:18:20.423 Firmware Activation Without Reset: N/A 00:18:20.423 Multiple Update Detection Support: N/A 00:18:20.423 Firmware Update Granularity: No Information Provided 00:18:20.423 Per-Namespace SMART Log: No 00:18:20.423 Asymmetric Namespace Access Log Page: Not Supported 00:18:20.423 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:20.423 Command Effects Log Page: Supported 00:18:20.423 Get Log Page Extended Data: Supported 00:18:20.423 Telemetry Log Pages: Not Supported 00:18:20.423 Persistent Event Log Pages: Not Supported 00:18:20.423 Supported Log Pages Log Page: May Support 00:18:20.423 Commands Supported & Effects Log Page: Not Supported 00:18:20.423 Feature Identifiers & Effects Log Page:May Support 00:18:20.423 NVMe-MI Commands & Effects Log Page: May Support 00:18:20.423 Data Area 4 for Telemetry Log: Not Supported 00:18:20.423 Error Log Page Entries Supported: 128 00:18:20.423 Keep Alive: Supported 00:18:20.423 Keep Alive Granularity: 10000 ms 00:18:20.423 00:18:20.423 NVM Command Set Attributes 00:18:20.423 ========================== 00:18:20.423 Submission Queue Entry Size 00:18:20.423 Max: 64 00:18:20.423 Min: 64 00:18:20.423 Completion Queue Entry Size 00:18:20.423 Max: 16 00:18:20.423 Min: 16 00:18:20.423 Number of Namespaces: 32 00:18:20.423 Compare Command: Supported 
00:18:20.423 Write Uncorrectable Command: Not Supported 00:18:20.423 Dataset Management Command: Supported 00:18:20.423 Write Zeroes Command: Supported 00:18:20.423 Set Features Save Field: Not Supported 00:18:20.423 Reservations: Not Supported 00:18:20.423 Timestamp: Not Supported 00:18:20.423 Copy: Supported 00:18:20.423 Volatile Write Cache: Present 00:18:20.423 Atomic Write Unit (Normal): 1 00:18:20.423 Atomic Write Unit (PFail): 1 00:18:20.423 Atomic Compare & Write Unit: 1 00:18:20.423 Fused Compare & Write: Supported 00:18:20.423 Scatter-Gather List 00:18:20.423 SGL Command Set: Supported (Dword aligned) 00:18:20.423 SGL Keyed: Not Supported 00:18:20.423 SGL Bit Bucket Descriptor: Not Supported 00:18:20.423 SGL Metadata Pointer: Not Supported 00:18:20.423 Oversized SGL: Not Supported 00:18:20.423 SGL Metadata Address: Not Supported 00:18:20.423 SGL Offset: Not Supported 00:18:20.423 Transport SGL Data Block: Not Supported 00:18:20.423 Replay Protected Memory Block: Not Supported 00:18:20.423 00:18:20.423 Firmware Slot Information 00:18:20.423 ========================= 00:18:20.423 Active slot: 1 00:18:20.423 Slot 1 Firmware Revision: 24.09 00:18:20.423 00:18:20.423 00:18:20.423 Commands Supported and Effects 00:18:20.423 ============================== 00:18:20.423 Admin Commands 00:18:20.423 -------------- 00:18:20.423 Get Log Page (02h): Supported 00:18:20.423 Identify (06h): Supported 00:18:20.423 Abort (08h): Supported 00:18:20.423 Set Features (09h): Supported 00:18:20.423 Get Features (0Ah): Supported 00:18:20.423 Asynchronous Event Request (0Ch): Supported 00:18:20.423 Keep Alive (18h): Supported 00:18:20.423 I/O Commands 00:18:20.423 ------------ 00:18:20.423 Flush (00h): Supported LBA-Change 00:18:20.423 Write (01h): Supported LBA-Change 00:18:20.423 Read (02h): Supported 00:18:20.423 Compare (05h): Supported 00:18:20.423 Write Zeroes (08h): Supported LBA-Change 00:18:20.423 Dataset Management (09h): Supported LBA-Change 00:18:20.423 Copy (19h): 
Supported LBA-Change 00:18:20.423 00:18:20.423 Error Log 00:18:20.423 ========= 00:18:20.423 00:18:20.423 Arbitration 00:18:20.423 =========== 00:18:20.423 Arbitration Burst: 1 00:18:20.423 00:18:20.423 Power Management 00:18:20.423 ================ 00:18:20.423 Number of Power States: 1 00:18:20.423 Current Power State: Power State #0 00:18:20.423 Power State #0: 00:18:20.423 Max Power: 0.00 W 00:18:20.423 Non-Operational State: Operational 00:18:20.423 Entry Latency: Not Reported 00:18:20.423 Exit Latency: Not Reported 00:18:20.423 Relative Read Throughput: 0 00:18:20.423 Relative Read Latency: 0 00:18:20.423 Relative Write Throughput: 0 00:18:20.423 Relative Write Latency: 0 00:18:20.423 Idle Power: Not Reported 00:18:20.423 Active Power: Not Reported 00:18:20.423 Non-Operational Permissive Mode: Not Supported 00:18:20.423 00:18:20.423 Health Information 00:18:20.423 ================== 00:18:20.423 Critical Warnings: 00:18:20.423 Available Spare Space: OK 00:18:20.423 Temperature: OK 00:18:20.423 Device Reliability: OK 00:18:20.423 Read Only: No 00:18:20.423 Volatile Memory Backup: OK 00:18:20.423 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:20.423 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:20.423 Available Spare: 0% 00:18:20.423 [2024-07-21 08:14:29.857817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:20.423 [2024-07-21 08:14:29.865626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:20.423 [2024-07-21 08:14:29.865682] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:20.423 [2024-07-21 08:14:29.865700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.423 [2024-07-21 08:14:29.865711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.423 [2024-07-21 08:14:29.865720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.423 [2024-07-21 08:14:29.865730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.423 [2024-07-21 08:14:29.865813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:20.423 [2024-07-21 08:14:29.865834] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:20.423 [2024-07-21 08:14:29.866811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:20.423 [2024-07-21 08:14:29.866879] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:20.423 [2024-07-21 08:14:29.866893] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:20.423 [2024-07-21 08:14:29.867822] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:20.423 [2024-07-21 08:14:29.867846] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:20.423 [2024-07-21 08:14:29.867896] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:20.423 [2024-07-21 08:14:29.869095] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:20.423 Available Spare Threshold: 0% 00:18:20.423 
Life Percentage Used: 0% 00:18:20.423 Data Units Read: 0 00:18:20.423 Data Units Written: 0 00:18:20.423 Host Read Commands: 0 00:18:20.423 Host Write Commands: 0 00:18:20.423 Controller Busy Time: 0 minutes 00:18:20.423 Power Cycles: 0 00:18:20.423 Power On Hours: 0 hours 00:18:20.423 Unsafe Shutdowns: 0 00:18:20.423 Unrecoverable Media Errors: 0 00:18:20.423 Lifetime Error Log Entries: 0 00:18:20.423 Warning Temperature Time: 0 minutes 00:18:20.423 Critical Temperature Time: 0 minutes 00:18:20.423 00:18:20.423 Number of Queues 00:18:20.423 ================ 00:18:20.424 Number of I/O Submission Queues: 127 00:18:20.424 Number of I/O Completion Queues: 127 00:18:20.424 00:18:20.424 Active Namespaces 00:18:20.424 ================= 00:18:20.424 Namespace ID:1 00:18:20.424 Error Recovery Timeout: Unlimited 00:18:20.424 Command Set Identifier: NVM (00h) 00:18:20.424 Deallocate: Supported 00:18:20.424 Deallocated/Unwritten Error: Not Supported 00:18:20.424 Deallocated Read Value: Unknown 00:18:20.424 Deallocate in Write Zeroes: Not Supported 00:18:20.424 Deallocated Guard Field: 0xFFFF 00:18:20.424 Flush: Supported 00:18:20.424 Reservation: Supported 00:18:20.424 Namespace Sharing Capabilities: Multiple Controllers 00:18:20.424 Size (in LBAs): 131072 (0GiB) 00:18:20.424 Capacity (in LBAs): 131072 (0GiB) 00:18:20.424 Utilization (in LBAs): 131072 (0GiB) 00:18:20.424 NGUID: CC06231CB006426E92E82062E31244A1 00:18:20.424 UUID: cc06231c-b006-426e-92e8-2062e31244a1 00:18:20.424 Thin Provisioning: Not Supported 00:18:20.424 Per-NS Atomic Units: Yes 00:18:20.424 Atomic Boundary Size (Normal): 0 00:18:20.424 Atomic Boundary Size (PFail): 0 00:18:20.424 Atomic Boundary Offset: 0 00:18:20.424 Maximum Single Source Range Length: 65535 00:18:20.424 Maximum Copy Length: 65535 00:18:20.424 Maximum Source Range Count: 1 00:18:20.424 NGUID/EUI64 Never Reused: No 00:18:20.424 Namespace Write Protected: No 00:18:20.424 Number of LBA Formats: 1 00:18:20.424 Current LBA Format: LBA Format 
#00 00:18:20.424 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:20.424 00:18:20.424 08:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:20.424 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.681 [2024-07-21 08:14:30.095486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:25.955 Initializing NVMe Controllers 00:18:25.955 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:25.955 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:25.955 Initialization complete. Launching workers. 00:18:25.955 ======================================================== 00:18:25.955 Latency(us) 00:18:25.955 Device Information : IOPS MiB/s Average min max 00:18:25.955 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35217.59 137.57 3633.91 1157.07 7376.53 00:18:25.955 ======================================================== 00:18:25.955 Total : 35217.59 137.57 3633.91 1157.07 7376.53 00:18:25.955 00:18:25.955 [2024-07-21 08:14:35.198963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:25.955 08:14:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:25.955 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.955 [2024-07-21 08:14:35.432611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:31.244 
Initializing NVMe Controllers 00:18:31.244 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:31.244 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:31.244 Initialization complete. Launching workers. 00:18:31.244 ======================================================== 00:18:31.244 Latency(us) 00:18:31.244 Device Information : IOPS MiB/s Average min max 00:18:31.244 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32228.92 125.89 3970.52 1195.72 8265.74 00:18:31.244 ======================================================== 00:18:31.244 Total : 32228.92 125.89 3970.52 1195.72 8265.74 00:18:31.244 00:18:31.244 [2024-07-21 08:14:40.449916] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:31.244 08:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:31.244 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.244 [2024-07-21 08:14:40.657746] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:36.513 [2024-07-21 08:14:45.793758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:36.513 Initializing NVMe Controllers 00:18:36.513 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:36.513 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:36.513 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:36.513 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:18:36.513 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:36.513 Initialization complete. Launching workers. 00:18:36.513 Starting thread on core 2 00:18:36.513 Starting thread on core 3 00:18:36.513 Starting thread on core 1 00:18:36.514 08:14:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:36.514 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.514 [2024-07-21 08:14:46.095090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:39.798 [2024-07-21 08:14:49.179479] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:39.798 Initializing NVMe Controllers 00:18:39.798 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:39.798 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:39.798 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:39.798 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:39.798 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:39.798 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:39.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:39.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:39.798 Initialization complete. Launching workers. 
00:18:39.798 Starting thread on core 1 with urgent priority queue 00:18:39.798 Starting thread on core 2 with urgent priority queue 00:18:39.798 Starting thread on core 3 with urgent priority queue 00:18:39.798 Starting thread on core 0 with urgent priority queue 00:18:39.798 SPDK bdev Controller (SPDK2 ) core 0: 3057.00 IO/s 32.71 secs/100000 ios 00:18:39.798 SPDK bdev Controller (SPDK2 ) core 1: 3120.00 IO/s 32.05 secs/100000 ios 00:18:39.798 SPDK bdev Controller (SPDK2 ) core 2: 2533.67 IO/s 39.47 secs/100000 ios 00:18:39.798 SPDK bdev Controller (SPDK2 ) core 3: 2847.67 IO/s 35.12 secs/100000 ios 00:18:39.798 ======================================================== 00:18:39.798 00:18:39.798 08:14:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:39.798 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.057 [2024-07-21 08:14:49.463178] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:40.057 Initializing NVMe Controllers 00:18:40.057 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:40.057 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:40.057 Namespace ID: 1 size: 0GB 00:18:40.057 Initialization complete. 00:18:40.057 INFO: using host memory buffer for IO 00:18:40.057 Hello world! 
00:18:40.057 [2024-07-21 08:14:49.472234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:40.057 08:14:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:40.057 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.315 [2024-07-21 08:14:49.751439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:41.285 Initializing NVMe Controllers 00:18:41.285 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:41.285 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:41.285 Initialization complete. Launching workers. 00:18:41.285 submit (in ns) avg, min, max = 8823.9, 3582.2, 4030217.8 00:18:41.285 complete (in ns) avg, min, max = 23719.4, 2070.0, 7007324.4 00:18:41.285 00:18:41.285 Submit histogram 00:18:41.285 ================ 00:18:41.285 Range in us Cumulative Count 00:18:41.285 3.579 - 3.603: 0.2318% ( 31) 00:18:41.285 3.603 - 3.627: 0.8301% ( 80) 00:18:41.285 3.627 - 3.650: 2.4678% ( 219) 00:18:41.285 3.650 - 3.674: 6.2818% ( 510) 00:18:41.285 3.674 - 3.698: 11.9653% ( 760) 00:18:41.285 3.698 - 3.721: 21.3955% ( 1261) 00:18:41.285 3.721 - 3.745: 29.8983% ( 1137) 00:18:41.285 3.745 - 3.769: 39.0368% ( 1222) 00:18:41.285 3.769 - 3.793: 46.0215% ( 934) 00:18:41.285 3.793 - 3.816: 52.9838% ( 931) 00:18:41.285 3.816 - 3.840: 57.8522% ( 651) 00:18:41.285 3.840 - 3.864: 62.5037% ( 622) 00:18:41.285 3.864 - 3.887: 66.4373% ( 526) 00:18:41.285 3.887 - 3.911: 69.8699% ( 459) 00:18:41.285 3.911 - 3.935: 73.5118% ( 487) 00:18:41.285 3.935 - 3.959: 76.8322% ( 444) 00:18:41.285 3.959 - 3.982: 80.1899% ( 449) 00:18:41.286 3.982 - 4.006: 83.5328% ( 447) 00:18:41.286 4.006 - 4.030: 85.9408% ( 322) 00:18:41.286 4.030 - 4.053: 87.8702% ( 
258) 00:18:41.286 4.053 - 4.077: 89.7024% ( 245) 00:18:41.286 4.077 - 4.101: 91.3401% ( 219) 00:18:41.286 4.101 - 4.124: 92.8657% ( 204) 00:18:41.286 4.124 - 4.148: 93.8229% ( 128) 00:18:41.286 4.148 - 4.172: 94.7128% ( 119) 00:18:41.286 4.172 - 4.196: 95.2662% ( 74) 00:18:41.286 4.196 - 4.219: 95.8047% ( 72) 00:18:41.286 4.219 - 4.243: 96.0888% ( 38) 00:18:41.286 4.243 - 4.267: 96.3207% ( 31) 00:18:41.286 4.267 - 4.290: 96.4478% ( 17) 00:18:41.286 4.290 - 4.314: 96.5076% ( 8) 00:18:41.286 4.314 - 4.338: 96.6198% ( 15) 00:18:41.286 4.338 - 4.361: 96.7320% ( 15) 00:18:41.286 4.361 - 4.385: 96.8142% ( 11) 00:18:41.286 4.385 - 4.409: 96.8965% ( 11) 00:18:41.286 4.409 - 4.433: 96.9189% ( 3) 00:18:41.286 4.433 - 4.456: 96.9788% ( 8) 00:18:41.286 4.456 - 4.480: 97.0311% ( 7) 00:18:41.286 4.480 - 4.504: 97.0610% ( 4) 00:18:41.286 4.504 - 4.527: 97.0760% ( 2) 00:18:41.286 4.527 - 4.551: 97.1134% ( 5) 00:18:41.286 4.551 - 4.575: 97.1208% ( 1) 00:18:41.286 4.599 - 4.622: 97.1358% ( 2) 00:18:41.286 4.646 - 4.670: 97.1582% ( 3) 00:18:41.286 4.693 - 4.717: 97.1657% ( 1) 00:18:41.286 4.717 - 4.741: 97.1807% ( 2) 00:18:41.286 4.741 - 4.764: 97.1956% ( 2) 00:18:41.286 4.764 - 4.788: 97.2031% ( 1) 00:18:41.286 4.836 - 4.859: 97.2255% ( 3) 00:18:41.286 4.859 - 4.883: 97.2480% ( 3) 00:18:41.286 4.883 - 4.907: 97.3003% ( 7) 00:18:41.286 4.907 - 4.930: 97.3153% ( 2) 00:18:41.286 4.930 - 4.954: 97.3751% ( 8) 00:18:41.286 4.954 - 4.978: 97.3975% ( 3) 00:18:41.286 4.978 - 5.001: 97.4349% ( 5) 00:18:41.286 5.001 - 5.025: 97.5097% ( 10) 00:18:41.286 5.025 - 5.049: 97.5322% ( 3) 00:18:41.286 5.049 - 5.073: 97.5920% ( 8) 00:18:41.286 5.073 - 5.096: 97.6369% ( 6) 00:18:41.286 5.096 - 5.120: 97.6668% ( 4) 00:18:41.286 5.120 - 5.144: 97.6817% ( 2) 00:18:41.286 5.144 - 5.167: 97.7042% ( 3) 00:18:41.286 5.167 - 5.191: 97.7640% ( 8) 00:18:41.286 5.191 - 5.215: 97.8163% ( 7) 00:18:41.286 5.215 - 5.239: 97.8238% ( 1) 00:18:41.286 5.239 - 5.262: 97.8911% ( 9) 00:18:41.286 5.262 - 5.286: 97.9061% ( 2) 
00:18:41.286 5.286 - 5.310: 97.9136% ( 1) 00:18:41.286 5.310 - 5.333: 97.9360% ( 3) 00:18:41.286 5.333 - 5.357: 97.9584% ( 3) 00:18:41.286 5.381 - 5.404: 97.9659% ( 1) 00:18:41.286 5.428 - 5.452: 97.9883% ( 3) 00:18:41.286 5.452 - 5.476: 98.0108% ( 3) 00:18:41.286 5.476 - 5.499: 98.0182% ( 1) 00:18:41.286 5.523 - 5.547: 98.0257% ( 1) 00:18:41.286 5.547 - 5.570: 98.0332% ( 1) 00:18:41.286 5.641 - 5.665: 98.0407% ( 1) 00:18:41.286 5.713 - 5.736: 98.0482% ( 1) 00:18:41.286 5.736 - 5.760: 98.0556% ( 1) 00:18:41.286 5.879 - 5.902: 98.0631% ( 1) 00:18:41.286 6.068 - 6.116: 98.0706% ( 1) 00:18:41.286 6.116 - 6.163: 98.0930% ( 3) 00:18:41.286 6.258 - 6.305: 98.1005% ( 1) 00:18:41.286 6.353 - 6.400: 98.1155% ( 2) 00:18:41.286 6.447 - 6.495: 98.1229% ( 1) 00:18:41.286 6.542 - 6.590: 98.1304% ( 1) 00:18:41.286 6.779 - 6.827: 98.1454% ( 2) 00:18:41.286 6.827 - 6.874: 98.1603% ( 2) 00:18:41.286 7.064 - 7.111: 98.1678% ( 1) 00:18:41.286 7.111 - 7.159: 98.1828% ( 2) 00:18:41.286 7.159 - 7.206: 98.1977% ( 2) 00:18:41.286 7.206 - 7.253: 98.2052% ( 1) 00:18:41.286 7.253 - 7.301: 98.2127% ( 1) 00:18:41.286 7.301 - 7.348: 98.2202% ( 1) 00:18:41.286 7.396 - 7.443: 98.2276% ( 1) 00:18:41.286 7.443 - 7.490: 98.2576% ( 4) 00:18:41.286 7.680 - 7.727: 98.2949% ( 5) 00:18:41.286 7.727 - 7.775: 98.3024% ( 1) 00:18:41.286 7.822 - 7.870: 98.3174% ( 2) 00:18:41.286 7.870 - 7.917: 98.3249% ( 1) 00:18:41.286 7.917 - 7.964: 98.3323% ( 1) 00:18:41.286 8.012 - 8.059: 98.3548% ( 3) 00:18:41.286 8.107 - 8.154: 98.3697% ( 2) 00:18:41.286 8.249 - 8.296: 98.3847% ( 2) 00:18:41.286 8.296 - 8.344: 98.4071% ( 3) 00:18:41.286 8.344 - 8.391: 98.4146% ( 1) 00:18:41.286 8.439 - 8.486: 98.4221% ( 1) 00:18:41.286 8.533 - 8.581: 98.4520% ( 4) 00:18:41.286 8.723 - 8.770: 98.4669% ( 2) 00:18:41.286 8.770 - 8.818: 98.4744% ( 1) 00:18:41.286 8.818 - 8.865: 98.4819% ( 1) 00:18:41.286 8.865 - 8.913: 98.4969% ( 2) 00:18:41.286 8.913 - 8.960: 98.5118% ( 2) 00:18:41.286 9.007 - 9.055: 98.5268% ( 2) 00:18:41.286 9.055 - 
9.102: 98.5417% ( 2) 00:18:41.286 9.102 - 9.150: 98.5492% ( 1) 00:18:41.286 9.292 - 9.339: 98.5642% ( 2) 00:18:41.286 9.434 - 9.481: 98.5791% ( 2) 00:18:41.286 9.481 - 9.529: 98.5866% ( 1) 00:18:41.286 9.529 - 9.576: 98.5941% ( 1) 00:18:41.286 9.576 - 9.624: 98.6165% ( 3) 00:18:41.286 9.624 - 9.671: 98.6315% ( 2) 00:18:41.286 9.813 - 9.861: 98.6389% ( 1) 00:18:41.286 9.861 - 9.908: 98.6539% ( 2) 00:18:41.286 9.908 - 9.956: 98.6614% ( 1) 00:18:41.286 9.956 - 10.003: 98.6763% ( 2) 00:18:41.286 10.193 - 10.240: 98.6838% ( 1) 00:18:41.286 10.287 - 10.335: 98.7063% ( 3) 00:18:41.286 10.382 - 10.430: 98.7137% ( 1) 00:18:41.286 10.477 - 10.524: 98.7212% ( 1) 00:18:41.286 10.524 - 10.572: 98.7287% ( 1) 00:18:41.286 10.572 - 10.619: 98.7362% ( 1) 00:18:41.286 10.619 - 10.667: 98.7436% ( 1) 00:18:41.286 10.714 - 10.761: 98.7511% ( 1) 00:18:41.286 11.046 - 11.093: 98.7586% ( 1) 00:18:41.286 11.093 - 11.141: 98.7661% ( 1) 00:18:41.286 11.283 - 11.330: 98.7736% ( 1) 00:18:41.286 11.330 - 11.378: 98.7810% ( 1) 00:18:41.286 11.425 - 11.473: 98.7885% ( 1) 00:18:41.286 11.520 - 11.567: 98.7960% ( 1) 00:18:41.286 11.567 - 11.615: 98.8109% ( 2) 00:18:41.286 11.615 - 11.662: 98.8184% ( 1) 00:18:41.286 11.710 - 11.757: 98.8259% ( 1) 00:18:41.286 11.757 - 11.804: 98.8334% ( 1) 00:18:41.286 11.994 - 12.041: 98.8409% ( 1) 00:18:41.286 12.089 - 12.136: 98.8483% ( 1) 00:18:41.286 12.136 - 12.231: 98.8558% ( 1) 00:18:41.286 12.421 - 12.516: 98.8633% ( 1) 00:18:41.286 12.610 - 12.705: 98.8708% ( 1) 00:18:41.286 12.895 - 12.990: 98.8857% ( 2) 00:18:41.286 13.274 - 13.369: 98.9007% ( 2) 00:18:41.286 13.464 - 13.559: 98.9156% ( 2) 00:18:41.286 13.559 - 13.653: 98.9231% ( 1) 00:18:41.286 13.748 - 13.843: 98.9306% ( 1) 00:18:41.286 14.033 - 14.127: 98.9381% ( 1) 00:18:41.286 14.222 - 14.317: 98.9456% ( 1) 00:18:41.286 14.317 - 14.412: 98.9530% ( 1) 00:18:41.286 14.601 - 14.696: 98.9680% ( 2) 00:18:41.286 14.696 - 14.791: 98.9755% ( 1) 00:18:41.286 14.791 - 14.886: 98.9829% ( 1) 00:18:41.286 16.972 
- 17.067: 98.9904% ( 1) 00:18:41.286 17.256 - 17.351: 99.0054% ( 2) 00:18:41.286 17.351 - 17.446: 99.0353% ( 4) 00:18:41.286 17.446 - 17.541: 99.0503% ( 2) 00:18:41.286 17.541 - 17.636: 99.1101% ( 8) 00:18:41.286 17.636 - 17.730: 99.2148% ( 14) 00:18:41.286 17.730 - 17.825: 99.2522% ( 5) 00:18:41.286 17.825 - 17.920: 99.2896% ( 5) 00:18:41.286 17.920 - 18.015: 99.3643% ( 10) 00:18:41.286 18.015 - 18.110: 99.3868% ( 3) 00:18:41.286 18.110 - 18.204: 99.4391% ( 7) 00:18:41.286 18.204 - 18.299: 99.5139% ( 10) 00:18:41.286 18.299 - 18.394: 99.5812% ( 9) 00:18:41.286 18.394 - 18.489: 99.5962% ( 2) 00:18:41.286 18.489 - 18.584: 99.6560% ( 8) 00:18:41.286 18.584 - 18.679: 99.6710% ( 2) 00:18:41.286 18.679 - 18.773: 99.7158% ( 6) 00:18:41.286 18.773 - 18.868: 99.7457% ( 4) 00:18:41.286 18.868 - 18.963: 99.7757% ( 4) 00:18:41.286 19.153 - 19.247: 99.7831% ( 1) 00:18:41.286 19.247 - 19.342: 99.7906% ( 1) 00:18:41.286 19.342 - 19.437: 99.7981% ( 1) 00:18:41.286 19.437 - 19.532: 99.8056% ( 1) 00:18:41.286 19.532 - 19.627: 99.8280% ( 3) 00:18:41.286 19.816 - 19.911: 99.8355% ( 1) 00:18:41.286 19.911 - 20.006: 99.8430% ( 1) 00:18:41.286 20.101 - 20.196: 99.8504% ( 1) 00:18:41.286 20.290 - 20.385: 99.8579% ( 1) 00:18:41.286 22.850 - 22.945: 99.8654% ( 1) 00:18:41.286 24.273 - 24.462: 99.8729% ( 1) 00:18:41.286 26.359 - 26.548: 99.8803% ( 1) 00:18:41.286 3980.705 - 4004.978: 99.9626% ( 11) 00:18:41.286 4004.978 - 4029.250: 99.9925% ( 4) 00:18:41.286 4029.250 - 4053.523: 100.0000% ( 1) 00:18:41.286 00:18:41.286 Complete histogram 00:18:41.286 ================== 00:18:41.286 Range in us Cumulative Count 00:18:41.286 2.062 - 2.074: 0.3664% ( 49) 00:18:41.286 2.074 - 2.086: 30.5414% ( 4035) 00:18:41.286 2.086 - 2.098: 44.5408% ( 1872) 00:18:41.286 2.098 - 2.110: 47.1882% ( 354) 00:18:41.286 2.110 - 2.121: 56.6856% ( 1270) 00:18:41.286 2.121 - 2.133: 60.2528% ( 477) 00:18:41.286 2.133 - 2.145: 63.8124% ( 476) 00:18:41.286 2.145 - 2.157: 76.2937% ( 1669) 00:18:41.286 2.157 - 2.169: 
79.3449% ( 408) 00:18:41.286 2.169 - 2.181: 81.3640% ( 270) 00:18:41.286 2.181 - 2.193: 85.6117% ( 568) 00:18:41.286 2.193 - 2.204: 87.0850% ( 197) 00:18:41.286 2.204 - 2.216: 87.9375% ( 114) 00:18:41.286 2.216 - 2.228: 90.5025% ( 343) 00:18:41.286 2.228 - 2.240: 92.4544% ( 261) 00:18:41.286 2.240 - 2.252: 93.4565% ( 134) 00:18:41.286 2.252 - 2.264: 94.2940% ( 112) 00:18:41.286 2.264 - 2.276: 94.6306% ( 45) 00:18:41.286 2.276 - 2.287: 94.8549% ( 30) 00:18:41.287 2.287 - 2.299: 95.0793% ( 30) 00:18:41.287 2.299 - 2.311: 95.5504% ( 63) 00:18:41.287 2.311 - 2.323: 95.8794% ( 44) 00:18:41.287 2.323 - 2.335: 95.9617% ( 11) 00:18:41.287 2.335 - 2.347: 96.0290% ( 9) 00:18:41.287 2.347 - 2.359: 96.1711% ( 19) 00:18:41.287 2.359 - 2.370: 96.4403% ( 36) 00:18:41.287 2.370 - 2.382: 96.7843% ( 46) 00:18:41.287 2.382 - 2.394: 97.1208% ( 45) 00:18:41.287 2.394 - 2.406: 97.4499% ( 44) 00:18:41.287 2.406 - 2.418: 97.7116% ( 35) 00:18:41.287 2.418 - 2.430: 97.8388% ( 17) 00:18:41.287 2.430 - 2.441: 97.9435% ( 14) 00:18:41.287 2.441 - 2.453: 98.0631% ( 16) 00:18:41.287 2.453 - 2.465: 98.1529% ( 12) 00:18:41.287 2.465 - 2.477: 98.2127% ( 8) 00:18:41.287 2.477 - 2.489: 98.2351% ( 3) 00:18:41.287 2.489 - 2.501: 98.2725% ( 5) 00:18:41.287 2.501 - 2.513: 98.3024% ( 4) 00:18:41.287 2.513 - 2.524: 98.3548% ( 7) 00:18:41.287 2.524 - 2.536: 98.3772% ( 3) 00:18:41.287 2.536 - 2.548: 98.3922% ( 2) 00:18:41.287 2.560 - 2.572: 98.3996% ( 1) 00:18:41.287 2.572 - 2.584: 98.4071% ( 1) 00:18:41.287 2.607 - 2.619: 98.4296% ( 3) 00:18:41.287 2.619 - 2.631: 98.4370% ( 1) 00:18:41.287 2.631 - 2.643: 98.4520% ( 2) 00:18:41.287 2.643 - 2.655: 98.4595% ( 1) 00:18:41.287 2.655 - 2.667: 98.4669% ( 1) 00:18:41.287 2.679 - 2.690: 98.4744% ( 1) 00:18:41.287 2.714 - 2.726: 98.4819% ( 1) 00:18:41.287 2.726 - 2.738: 98.4894% ( 1) 00:18:41.287 2.750 - 2.761: 98.5043% ( 2) 00:18:41.287 2.761 - 2.773: 98.5118% ( 1) 00:18:41.287 2.821 - 2.833: 98.5193% ( 1) 00:18:41.287 2.833 - 2.844: 98.5268% ( 1) 00:18:41.287 2.844 
- 2.856: 98.5343% ( 1) 00:18:41.287 2.892 - 2.904: 98.5417% ( 1) 00:18:41.287 2.939 - 2.951: 98.5492% ( 1) 00:18:41.287 [2024-07-21 08:14:50.847352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:41.287 3.010 - 3.022: 98.5567% ( 1) 00:18:41.287 3.176 - 3.200: 98.5642% ( 1) 00:18:41.287 3.508 - 3.532: 98.5716% ( 1) 00:18:41.287 3.532 - 3.556: 98.5791% ( 1) 00:18:41.287 3.556 - 3.579: 98.5866% ( 1) 00:18:41.287 3.579 - 3.603: 98.6016% ( 2) 00:18:41.287 3.603 - 3.627: 98.6090% ( 1) 00:18:41.287 3.650 - 3.674: 98.6240% ( 2) 00:18:41.287 3.674 - 3.698: 98.6315% ( 1) 00:18:41.287 3.698 - 3.721: 98.6389% ( 1) 00:18:41.287 3.721 - 3.745: 98.6539% ( 2) 00:18:41.287 3.745 - 3.769: 98.6614% ( 1) 00:18:41.287 3.793 - 3.816: 98.6689% ( 1) 00:18:41.287 3.816 - 3.840: 98.6763% ( 1) 00:18:41.287 3.840 - 3.864: 98.6913% ( 2) 00:18:41.287 3.887 - 3.911: 98.6988% ( 1) 00:18:41.287 4.006 - 4.030: 98.7137% ( 2) 00:18:41.287 4.077 - 4.101: 98.7212% ( 1) 00:18:41.287 4.196 - 4.219: 98.7287% ( 1) 00:18:41.287 4.243 - 4.267: 98.7362% ( 1) 00:18:41.287 4.267 - 4.290: 98.7436% ( 1) 00:18:41.287 4.361 - 4.385: 98.7511% ( 1) 00:18:41.287 4.504 - 4.527: 98.7586% ( 1) 00:18:41.287 4.551 - 4.575: 98.7661% ( 1) 00:18:41.287 4.788 - 4.812: 98.7736% ( 1) 00:18:41.287 4.859 - 4.883: 98.7885% ( 2) 00:18:41.287 4.954 - 4.978: 98.7960% ( 1) 00:18:41.287 5.001 - 5.025: 98.8184% ( 3) 00:18:41.287 5.025 - 5.049: 98.8259% ( 1) 00:18:41.287 5.096 - 5.120: 98.8409% ( 2) 00:18:41.287 5.926 - 5.950: 98.8483% ( 1) 00:18:41.287 5.950 - 5.973: 98.8558% ( 1) 00:18:41.287 6.400 - 6.447: 98.8633% ( 1) 00:18:41.287 6.590 - 6.637: 98.8708% ( 1) 00:18:41.287 6.684 - 6.732: 98.8783% ( 1) 00:18:41.287 6.779 - 6.827: 98.8932% ( 2) 00:18:41.287 6.827 - 6.874: 98.9007% ( 1) 00:18:41.287 8.344 - 8.391: 98.9082% ( 1) 00:18:41.287 13.084 - 13.179: 98.9156% ( 1) 00:18:41.287 15.455 - 15.550: 98.9306% ( 2) 00:18:41.287 15.550 - 15.644: 98.9381% ( 1) 00:18:41.287 15.739 
- 15.834: 98.9605% ( 3) 00:18:41.287 15.834 - 15.929: 98.9680% ( 1) 00:18:41.287 15.929 - 16.024: 98.9904% ( 3) 00:18:41.287 16.024 - 16.119: 99.0129% ( 3) 00:18:41.287 16.119 - 16.213: 99.0428% ( 4) 00:18:41.287 16.213 - 16.308: 99.0577% ( 2) 00:18:41.287 16.308 - 16.403: 99.0652% ( 1) 00:18:41.287 16.403 - 16.498: 99.1101% ( 6) 00:18:41.287 16.498 - 16.593: 99.1624% ( 7) 00:18:41.287 16.593 - 16.687: 99.2073% ( 6) 00:18:41.287 16.687 - 16.782: 99.2223% ( 2) 00:18:41.287 16.782 - 16.877: 99.2447% ( 3) 00:18:41.287 16.877 - 16.972: 99.2746% ( 4) 00:18:41.287 16.972 - 17.067: 99.2896% ( 2) 00:18:41.287 17.067 - 17.161: 99.3120% ( 3) 00:18:41.287 17.161 - 17.256: 99.3344% ( 3) 00:18:41.287 17.256 - 17.351: 99.3419% ( 1) 00:18:41.287 17.541 - 17.636: 99.3494% ( 1) 00:18:41.287 17.636 - 17.730: 99.3793% ( 4) 00:18:41.287 17.825 - 17.920: 99.4092% ( 4) 00:18:41.287 18.015 - 18.110: 99.4167% ( 1) 00:18:41.287 18.204 - 18.299: 99.4242% ( 1) 00:18:41.287 18.394 - 18.489: 99.4466% ( 3) 00:18:41.287 18.489 - 18.584: 99.4541% ( 1) 00:18:41.287 18.584 - 18.679: 99.4616% ( 1) 00:18:41.287 27.307 - 27.496: 99.4690% ( 1) 00:18:41.287 3009.801 - 3021.938: 99.4765% ( 1) 00:18:41.287 3228.255 - 3252.527: 99.4840% ( 1) 00:18:41.287 3980.705 - 4004.978: 99.8056% ( 43) 00:18:41.287 4004.978 - 4029.250: 99.9850% ( 24) 00:18:41.287 5995.330 - 6019.603: 99.9925% ( 1) 00:18:41.287 6990.507 - 7039.052: 100.0000% ( 1) 00:18:41.287 00:18:41.287 08:14:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:41.287 08:14:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:41.287 08:14:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:41.287 08:14:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:41.287 08:14:50 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:41.545 [ 00:18:41.545 { 00:18:41.545 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:41.545 "subtype": "Discovery", 00:18:41.545 "listen_addresses": [], 00:18:41.545 "allow_any_host": true, 00:18:41.545 "hosts": [] 00:18:41.545 }, 00:18:41.545 { 00:18:41.545 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:41.545 "subtype": "NVMe", 00:18:41.545 "listen_addresses": [ 00:18:41.545 { 00:18:41.545 "trtype": "VFIOUSER", 00:18:41.545 "adrfam": "IPv4", 00:18:41.545 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:41.545 "trsvcid": "0" 00:18:41.545 } 00:18:41.545 ], 00:18:41.545 "allow_any_host": true, 00:18:41.545 "hosts": [], 00:18:41.545 "serial_number": "SPDK1", 00:18:41.545 "model_number": "SPDK bdev Controller", 00:18:41.545 "max_namespaces": 32, 00:18:41.545 "min_cntlid": 1, 00:18:41.545 "max_cntlid": 65519, 00:18:41.545 "namespaces": [ 00:18:41.545 { 00:18:41.545 "nsid": 1, 00:18:41.545 "bdev_name": "Malloc1", 00:18:41.545 "name": "Malloc1", 00:18:41.545 "nguid": "18F86681094349C89EA613053D5AC3FE", 00:18:41.545 "uuid": "18f86681-0943-49c8-9ea6-13053d5ac3fe" 00:18:41.545 }, 00:18:41.545 { 00:18:41.545 "nsid": 2, 00:18:41.545 "bdev_name": "Malloc3", 00:18:41.545 "name": "Malloc3", 00:18:41.545 "nguid": "C9F3B4AB642D44C1871FC4085B7AB566", 00:18:41.545 "uuid": "c9f3b4ab-642d-44c1-871f-c4085b7ab566" 00:18:41.545 } 00:18:41.545 ] 00:18:41.545 }, 00:18:41.545 { 00:18:41.545 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:41.545 "subtype": "NVMe", 00:18:41.545 "listen_addresses": [ 00:18:41.545 { 00:18:41.545 "trtype": "VFIOUSER", 00:18:41.545 "adrfam": "IPv4", 00:18:41.545 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:41.545 "trsvcid": "0" 00:18:41.545 } 00:18:41.545 ], 00:18:41.545 "allow_any_host": true, 00:18:41.545 "hosts": [], 00:18:41.545 "serial_number": "SPDK2", 00:18:41.545 "model_number": "SPDK bdev 
Controller", 00:18:41.545 "max_namespaces": 32, 00:18:41.545 "min_cntlid": 1, 00:18:41.545 "max_cntlid": 65519, 00:18:41.545 "namespaces": [ 00:18:41.545 { 00:18:41.545 "nsid": 1, 00:18:41.545 "bdev_name": "Malloc2", 00:18:41.545 "name": "Malloc2", 00:18:41.545 "nguid": "CC06231CB006426E92E82062E31244A1", 00:18:41.545 "uuid": "cc06231c-b006-426e-92e8-2062e31244a1" 00:18:41.545 } 00:18:41.545 ] 00:18:41.545 } 00:18:41.545 ] 00:18:41.545 08:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:41.545 08:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4086540 00:18:41.545 08:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:41.546 08:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:41.546 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:41.546 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:41.546 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:41.546 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:41.546 08:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:41.546 08:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:41.803 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.803 [2024-07-21 08:14:51.299094] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:41.803 Malloc4 00:18:41.803 08:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:42.060 [2024-07-21 08:14:51.658683] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:42.060 08:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:42.318 Asynchronous Event Request test 00:18:42.318 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:42.318 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:42.318 Registering asynchronous event callbacks... 00:18:42.318 Starting namespace attribute notice tests for all controllers... 00:18:42.318 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:42.318 aer_cb - Changed Namespace 00:18:42.318 Cleaning up... 
00:18:42.318 [ 00:18:42.318 { 00:18:42.318 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:42.318 "subtype": "Discovery", 00:18:42.318 "listen_addresses": [], 00:18:42.318 "allow_any_host": true, 00:18:42.318 "hosts": [] 00:18:42.318 }, 00:18:42.318 { 00:18:42.319 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:42.319 "subtype": "NVMe", 00:18:42.319 "listen_addresses": [ 00:18:42.319 { 00:18:42.319 "trtype": "VFIOUSER", 00:18:42.319 "adrfam": "IPv4", 00:18:42.319 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:42.319 "trsvcid": "0" 00:18:42.319 } 00:18:42.319 ], 00:18:42.319 "allow_any_host": true, 00:18:42.319 "hosts": [], 00:18:42.319 "serial_number": "SPDK1", 00:18:42.319 "model_number": "SPDK bdev Controller", 00:18:42.319 "max_namespaces": 32, 00:18:42.319 "min_cntlid": 1, 00:18:42.319 "max_cntlid": 65519, 00:18:42.319 "namespaces": [ 00:18:42.319 { 00:18:42.319 "nsid": 1, 00:18:42.319 "bdev_name": "Malloc1", 00:18:42.319 "name": "Malloc1", 00:18:42.319 "nguid": "18F86681094349C89EA613053D5AC3FE", 00:18:42.319 "uuid": "18f86681-0943-49c8-9ea6-13053d5ac3fe" 00:18:42.319 }, 00:18:42.319 { 00:18:42.319 "nsid": 2, 00:18:42.319 "bdev_name": "Malloc3", 00:18:42.319 "name": "Malloc3", 00:18:42.319 "nguid": "C9F3B4AB642D44C1871FC4085B7AB566", 00:18:42.319 "uuid": "c9f3b4ab-642d-44c1-871f-c4085b7ab566" 00:18:42.319 } 00:18:42.319 ] 00:18:42.319 }, 00:18:42.319 { 00:18:42.319 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:42.319 "subtype": "NVMe", 00:18:42.319 "listen_addresses": [ 00:18:42.319 { 00:18:42.319 "trtype": "VFIOUSER", 00:18:42.319 "adrfam": "IPv4", 00:18:42.319 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:42.319 "trsvcid": "0" 00:18:42.319 } 00:18:42.319 ], 00:18:42.319 "allow_any_host": true, 00:18:42.319 "hosts": [], 00:18:42.319 "serial_number": "SPDK2", 00:18:42.319 "model_number": "SPDK bdev Controller", 00:18:42.319 "max_namespaces": 32, 00:18:42.319 "min_cntlid": 1, 00:18:42.319 "max_cntlid": 65519, 00:18:42.319 "namespaces": [ 
00:18:42.319 { 00:18:42.319 "nsid": 1, 00:18:42.319 "bdev_name": "Malloc2", 00:18:42.319 "name": "Malloc2", 00:18:42.319 "nguid": "CC06231CB006426E92E82062E31244A1", 00:18:42.319 "uuid": "cc06231c-b006-426e-92e8-2062e31244a1" 00:18:42.319 }, 00:18:42.319 { 00:18:42.319 "nsid": 2, 00:18:42.319 "bdev_name": "Malloc4", 00:18:42.319 "name": "Malloc4", 00:18:42.319 "nguid": "E77F39C2E53F48D8B5100D47B3AAD333", 00:18:42.319 "uuid": "e77f39c2-e53f-48d8-b510-0d47b3aad333" 00:18:42.319 } 00:18:42.319 ] 00:18:42.319 } 00:18:42.319 ] 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4086540 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4081019 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4081019 ']' 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4081019 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4081019 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4081019' 00:18:42.319 killing process with pid 4081019 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4081019 00:18:42.319 08:14:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4081019 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4086703 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4086703' 00:18:42.885 Process pid: 4086703 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4086703 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4086703 ']' 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.885 08:14:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:42.885 [2024-07-21 08:14:52.319651] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:42.885 [2024-07-21 08:14:52.320785] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:18:42.885 [2024-07-21 08:14:52.320848] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.885 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.885 [2024-07-21 08:14:52.378718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.885 [2024-07-21 08:14:52.464493] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.885 [2024-07-21 08:14:52.464547] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.885 [2024-07-21 08:14:52.464560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.885 [2024-07-21 08:14:52.464571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.885 [2024-07-21 08:14:52.464580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:42.885 [2024-07-21 08:14:52.464705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.885 [2024-07-21 08:14:52.464770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.885 [2024-07-21 08:14:52.464836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.885 [2024-07-21 08:14:52.464839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.145 [2024-07-21 08:14:52.565887] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:43.145 [2024-07-21 08:14:52.566067] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:43.145 [2024-07-21 08:14:52.566331] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:43.145 [2024-07-21 08:14:52.566891] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:43.145 [2024-07-21 08:14:52.567145] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:18:43.145 08:14:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.145 08:14:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:18:43.145 08:14:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:44.082 08:14:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:44.340 08:14:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:44.340 08:14:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:44.340 08:14:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:44.340 08:14:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:44.340 08:14:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:44.598 Malloc1 00:18:44.598 08:14:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:44.857 08:14:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:45.427 08:14:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:45.427 08:14:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:45.428 08:14:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:18:45.428 08:14:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:45.686 Malloc2 00:18:45.944 08:14:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:45.944 08:14:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:46.202 08:14:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:46.461 08:14:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:46.461 08:14:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4086703 00:18:46.461 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4086703 ']' 00:18:46.461 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4086703 00:18:46.461 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:18:46.461 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.461 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4086703 00:18:46.720 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:46.720 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:46.720 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4086703' 00:18:46.720 killing 
process with pid 4086703 00:18:46.720 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4086703 00:18:46.720 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4086703 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:46.980 00:18:46.980 real 0m52.488s 00:18:46.980 user 3m26.855s 00:18:46.980 sys 0m4.395s 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:46.980 ************************************ 00:18:46.980 END TEST nvmf_vfio_user 00:18:46.980 ************************************ 00:18:46.980 08:14:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:46.980 08:14:56 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:46.980 08:14:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:46.980 08:14:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.980 08:14:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:46.980 ************************************ 00:18:46.980 START TEST nvmf_vfio_user_nvme_compliance 00:18:46.980 ************************************ 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:46.980 * Looking for test storage... 
00:18:46.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.980 08:14:56 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:18:46.980 08:14:56 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=4087230 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 4087230' 00:18:46.980 Process pid: 4087230 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 4087230 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 4087230 ']' 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.980 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:46.980 [2024-07-21 08:14:56.584523] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:18:46.980 [2024-07-21 08:14:56.584598] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.238 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.238 [2024-07-21 08:14:56.642660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:47.238 [2024-07-21 08:14:56.729653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.238 [2024-07-21 08:14:56.729699] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:47.238 [2024-07-21 08:14:56.729714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.238 [2024-07-21 08:14:56.729727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.238 [2024-07-21 08:14:56.729737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.238 [2024-07-21 08:14:56.729804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.238 [2024-07-21 08:14:56.729837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.238 [2024-07-21 08:14:56.729839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.238 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.238 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:18:47.238 08:14:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:48.622 malloc0 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.622 08:14:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:48.622 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.622 00:18:48.622 00:18:48.622 CUnit - A unit testing framework for C - Version 2.1-3 00:18:48.622 http://cunit.sourceforge.net/ 00:18:48.622 00:18:48.622 00:18:48.622 Suite: nvme_compliance 00:18:48.622 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-21 08:14:58.074145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.622 [2024-07-21 08:14:58.075555] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:48.622 [2024-07-21 08:14:58.075580] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:48.622 [2024-07-21 08:14:58.075606] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:48.622 [2024-07-21 08:14:58.077160] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.622 passed 00:18:48.623 Test: admin_identify_ctrlr_verify_fused ...[2024-07-21 08:14:58.162744] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.623 [2024-07-21 08:14:58.165767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.623 passed 00:18:48.623 Test: admin_identify_ns ...[2024-07-21 08:14:58.250086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.881 [2024-07-21 08:14:58.311634] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:48.881 [2024-07-21 08:14:58.319633] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:48.881 [2024-07-21 08:14:58.340772] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:18:48.881 passed 00:18:48.881 Test: admin_get_features_mandatory_features ...[2024-07-21 08:14:58.421430] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.881 [2024-07-21 08:14:58.426462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.881 passed 00:18:48.881 Test: admin_get_features_optional_features ...[2024-07-21 08:14:58.509035] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.154 [2024-07-21 08:14:58.514074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.154 passed 00:18:49.154 Test: admin_set_features_number_of_queues ...[2024-07-21 08:14:58.595070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.154 [2024-07-21 08:14:58.699729] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.154 passed 00:18:49.154 Test: admin_get_log_page_mandatory_logs ...[2024-07-21 08:14:58.783798] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.411 [2024-07-21 08:14:58.786819] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.411 passed 00:18:49.411 Test: admin_get_log_page_with_lpo ...[2024-07-21 08:14:58.869129] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.411 [2024-07-21 08:14:58.936639] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:49.411 [2024-07-21 08:14:58.949712] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.411 passed 00:18:49.411 Test: fabric_property_get ...[2024-07-21 08:14:59.033794] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.411 [2024-07-21 08:14:59.035075] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:18:49.411 [2024-07-21 08:14:59.036815] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.670 passed 00:18:49.670 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-21 08:14:59.121370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.670 [2024-07-21 08:14:59.122689] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:49.670 [2024-07-21 08:14:59.124390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.670 passed 00:18:49.670 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-21 08:14:59.205615] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.670 [2024-07-21 08:14:59.291624] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:49.928 [2024-07-21 08:14:59.307637] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:49.928 [2024-07-21 08:14:59.312728] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.928 passed 00:18:49.928 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-21 08:14:59.396239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.928 [2024-07-21 08:14:59.397531] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:49.928 [2024-07-21 08:14:59.399260] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.928 passed 00:18:49.928 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-21 08:14:59.482122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.928 [2024-07-21 08:14:59.557637] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:50.185 [2024-07-21 08:14:59.581624] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:50.185 [2024-07-21 08:14:59.586726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:50.185 passed 00:18:50.185 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-21 08:14:59.670256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:50.185 [2024-07-21 08:14:59.671549] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:50.185 [2024-07-21 08:14:59.671586] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:50.185 [2024-07-21 08:14:59.673276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:50.185 passed 00:18:50.185 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-21 08:14:59.756133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:50.443 [2024-07-21 08:14:59.846693] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:50.443 [2024-07-21 08:14:59.855627] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:50.443 [2024-07-21 08:14:59.863627] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:50.443 [2024-07-21 08:14:59.871626] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:50.443 [2024-07-21 08:14:59.900724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:50.443 passed 00:18:50.443 Test: admin_create_io_sq_verify_pc ...[2024-07-21 08:14:59.984271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:50.443 [2024-07-21 08:15:00.000633] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:50.443 [2024-07-21 08:15:00.018611] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:50.443 passed 00:18:50.701 Test: admin_create_io_qp_max_qps ...[2024-07-21 08:15:00.102259] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:51.637 [2024-07-21 08:15:01.191629] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:52.204 [2024-07-21 08:15:01.570370] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.204 passed 00:18:52.204 Test: admin_create_io_sq_shared_cq ...[2024-07-21 08:15:01.652679] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.204 [2024-07-21 08:15:01.786640] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:52.204 [2024-07-21 08:15:01.823718] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.463 passed 00:18:52.463 00:18:52.463 Run Summary: Type Total Ran Passed Failed Inactive 00:18:52.463 suites 1 1 n/a 0 0 00:18:52.463 tests 18 18 18 0 0 00:18:52.463 asserts 360 360 360 0 n/a 00:18:52.463 00:18:52.463 Elapsed time = 1.552 seconds 00:18:52.463 08:15:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 4087230 00:18:52.463 08:15:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 4087230 ']' 00:18:52.463 08:15:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 4087230 00:18:52.463 08:15:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:18:52.463 08:15:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:52.463 08:15:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4087230 00:18:52.463 08:15:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:52.463 08:15:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:52.463 08:15:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4087230' 00:18:52.463 killing process with pid 4087230 00:18:52.463 08:15:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 4087230 00:18:52.463 08:15:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 4087230 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:52.728 00:18:52.728 real 0m5.700s 00:18:52.728 user 0m15.997s 00:18:52.728 sys 0m0.564s 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:52.728 ************************************ 00:18:52.728 END TEST nvmf_vfio_user_nvme_compliance 00:18:52.728 ************************************ 00:18:52.728 08:15:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:52.728 08:15:02 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:52.728 08:15:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:52.728 08:15:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.728 08:15:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:52.728 ************************************ 00:18:52.728 START TEST nvmf_vfio_user_fuzz 00:18:52.728 ************************************ 00:18:52.728 08:15:02 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:52.728 * Looking for test storage... 00:18:52.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.728 
08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.728 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4088066 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4088066' 00:18:52.729 Process pid: 4088066 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4088066 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 4088066 ']' 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.729 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:52.989 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:52.989 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:18:52.989 08:15:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:54.366 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:54.366 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.366 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:54.366 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.366 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:54.366 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 malloc0 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:54.367 08:15:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:26.460 Fuzzing completed. 
Shutting down the fuzz application 00:19:26.460 00:19:26.460 Dumping successful admin opcodes: 00:19:26.460 8, 9, 10, 24, 00:19:26.460 Dumping successful io opcodes: 00:19:26.460 0, 00:19:26.460 NS: 0x200003a1ef00 I/O qp, Total commands completed: 603696, total successful commands: 2330, random_seed: 1245875776 00:19:26.460 NS: 0x200003a1ef00 admin qp, Total commands completed: 100147, total successful commands: 821, random_seed: 654507904 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 4088066 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 4088066 ']' 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 4088066 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4088066 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4088066' 00:19:26.460 killing process with pid 4088066 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 4088066 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 4088066 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:26.460 00:19:26.460 real 0m32.240s 00:19:26.460 user 0m31.062s 00:19:26.460 sys 0m30.528s 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:26.460 08:15:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:26.460 ************************************ 00:19:26.460 END TEST nvmf_vfio_user_fuzz 00:19:26.460 ************************************ 00:19:26.460 08:15:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:26.460 08:15:34 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:19:26.460 08:15:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:26.460 08:15:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:26.460 08:15:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:26.460 ************************************ 00:19:26.460 START TEST nvmf_host_management 00:19:26.460 ************************************ 00:19:26.460 08:15:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:19:26.461 * Looking for test storage... 
00:19:26.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.461 
08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:19:26.461 08:15:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:27.026 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.026 
08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:27.026 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:27.026 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:27.026 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.026 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:27.027 08:15:36 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:27.027 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:27.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:27.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:19:27.285 00:19:27.285 --- 10.0.0.2 ping statistics --- 00:19:27.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.285 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:27.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:19:27.285 00:19:27.285 --- 10.0.0.1 ping statistics --- 00:19:27.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.285 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:19:27.285 08:15:36 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4094011 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4094011 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4094011 ']' 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.285 08:15:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:27.285 [2024-07-21 08:15:36.742305] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:19:27.285 [2024-07-21 08:15:36.742401] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.285 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.285 [2024-07-21 08:15:36.810938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:27.285 [2024-07-21 08:15:36.906164] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.285 [2024-07-21 08:15:36.906225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.285 [2024-07-21 08:15:36.906249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.285 [2024-07-21 08:15:36.906263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.285 [2024-07-21 08:15:36.906275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:27.285 [2024-07-21 08:15:36.906357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.285 [2024-07-21 08:15:36.906427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:27.285 [2024-07-21 08:15:36.906479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:27.285 [2024-07-21 08:15:36.906481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:27.543 [2024-07-21 08:15:37.065377] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:27.543 08:15:37 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:27.543 Malloc0 00:19:27.543 [2024-07-21 08:15:37.125451] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4094173 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4094173 /var/tmp/bdevperf.sock 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4094173 ']' 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.543 { 00:19:27.543 "params": { 00:19:27.543 "name": "Nvme$subsystem", 00:19:27.543 "trtype": "$TEST_TRANSPORT", 00:19:27.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.543 "adrfam": "ipv4", 00:19:27.543 "trsvcid": "$NVMF_PORT", 00:19:27.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.543 "hdgst": ${hdgst:-false}, 00:19:27.543 "ddgst": ${ddgst:-false} 00:19:27.543 }, 00:19:27.543 "method": "bdev_nvme_attach_controller" 00:19:27.543 } 00:19:27.543 EOF 00:19:27.543 )") 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:19:27.543 08:15:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:27.543 "params": { 00:19:27.543 "name": "Nvme0", 00:19:27.543 "trtype": "tcp", 00:19:27.543 "traddr": "10.0.0.2", 00:19:27.543 "adrfam": "ipv4", 00:19:27.543 "trsvcid": "4420", 00:19:27.543 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:27.543 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:27.543 "hdgst": false, 00:19:27.543 "ddgst": false 00:19:27.543 }, 00:19:27.543 "method": "bdev_nvme_attach_controller" 00:19:27.543 }' 00:19:27.803 [2024-07-21 08:15:37.203277] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:19:27.803 [2024-07-21 08:15:37.203370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4094173 ] 00:19:27.803 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.803 [2024-07-21 08:15:37.262826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.803 [2024-07-21 08:15:37.349845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.368 Running I/O for 10 seconds... 
00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.368 
08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:19:28.368 08:15:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:19:28.627 08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=526 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 526 -ge 100 ']' 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:28.628 08:15:38 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.628 08:15:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:28.628
[2024-07-21 08:15:38.076621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27d20 is same with the state(5) to be set
[previous message repeated 62 more times for tqpair=0xc27d20, 08:15:38.076730 through 08:15:38.077530]
[2024-07-21 08:15:38.077644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-21 08:15:38.077684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[READ/ABORTED - SQ DELETION pair repeated for the remaining 63 outstanding reads, cid:1-63 (lba:73856-81792, len:128 each), 08:15:38.077718 through 08:15:38.079762]
[2024-07-21 08:15:38.079778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7e420 is same with the state(5) to be set
[2024-07-21 08:15:38.079853] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf7e420 was disconnected and freed. reset controller.
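The read_io_count xtrace earlier in this excerpt (host_management.sh@54-64) is a bounded polling loop: decrement a retry budget, query bdevperf's iostat, break once at least 100 reads have completed, otherwise sleep 0.25 s and retry. A minimal sketch of that loop follows; `get_read_count` is a hypothetical stand-in the caller must supply (in the real script it is `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`), and the threshold/retry defaults are illustrative, not the script's exact values:

```shell
# Bounded polling loop, as in the target/host_management.sh trace above:
# spend the retry budget, stop early once the read threshold is crossed,
# otherwise back off briefly before the next iostat query.
wait_for_reads() {
    local threshold=${1:-100} tries=${2:-20} count
    while (( tries-- > 0 )); do
        count=$(get_read_count)          # caller-supplied query (see lead-in)
        if [ "$count" -ge "$threshold" ]; then
            return 0                     # enough I/O observed; success
        fi
        sleep 0.25                       # matches the script's back-off
    done
    return 1                             # budget exhausted without progress
}
```

In the log above, the first pass reads read_io_count=67 (below the threshold), sleeps 0.25 s, and the second pass reads 526, so the loop breaks with ret=0.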
[2024-07-21 08:15:38.079927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-21 08:15:38.079949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[ASYNC EVENT REQUEST/ABORTED pair repeated for admin cid:1-3, 08:15:38.079966 through 08:15:38.080046]
[2024-07-21 08:15:38.080061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf84000 is same with the state(5) to be set
08:15:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
[2024-07-21 08:15:38.081260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
08:15:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
08:15:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
task offset: 73728 on job bdev=Nvme0n1 fails

Latency(us)
Device Information                                                          : runtime(s)    IOPS     MiB/s   Fail/s    TO/s   Average      min      max
Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Job: Nvme0n1 ended in about 0.39 seconds with error
Verification LBA range: start 0x0 length 0x400
Nvme0n1                                                                     :       0.39 1484.05   92.75   164.89     0.00 37670.23  7039.05 33787.45
===================================================================================================================
Total                                                                       :            1484.05   92.75   164.89     0.00 37670.23  7039.05 33787.45

[2024-07-21 08:15:38.083315] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
[2024-07-21 08:15:38.083354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf84000 (9): Bad file descriptor
08:15:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
08:15:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-21 08:15:38.090742] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:29.567 08:15:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4094173 00:19:29.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4094173) - No such process 00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:29.568 { 00:19:29.568 "params": { 00:19:29.568 "name": "Nvme$subsystem", 00:19:29.568 "trtype": "$TEST_TRANSPORT", 00:19:29.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.568 "adrfam": "ipv4", 00:19:29.568 "trsvcid": "$NVMF_PORT", 00:19:29.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.568 "hdgst": ${hdgst:-false}, 00:19:29.568 "ddgst": ${ddgst:-false} 00:19:29.568 }, 00:19:29.568 "method": "bdev_nvme_attach_controller" 00:19:29.568 } 00:19:29.568 EOF 00:19:29.568 )") 00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:19:29.568 08:15:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:29.568 "params": { 00:19:29.568 "name": "Nvme0", 00:19:29.568 "trtype": "tcp", 00:19:29.568 "traddr": "10.0.0.2", 00:19:29.568 "adrfam": "ipv4", 00:19:29.568 "trsvcid": "4420", 00:19:29.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:29.568 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:29.568 "hdgst": false, 00:19:29.568 "ddgst": false 00:19:29.568 }, 00:19:29.568 "method": "bdev_nvme_attach_controller" 00:19:29.568 }' 00:19:29.568 [2024-07-21 08:15:39.137408] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:19:29.568 [2024-07-21 08:15:39.137508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4094329 ] 00:19:29.568 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.826 [2024-07-21 08:15:39.198498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.826 [2024-07-21 08:15:39.283461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.085 Running I/O for 1 seconds... 
00:19:31.468 00:19:31.468 Latency(us) 00:19:31.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.468 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:31.468 Verification LBA range: start 0x0 length 0x400 00:19:31.468 Nvme0n1 : 1.03 1677.01 104.81 0.00 0.00 37545.65 5873.97 33010.73 00:19:31.468 =================================================================================================================== 00:19:31.468 Total : 1677.01 104.81 0.00 0.00 37545.65 5873.97 33010.73 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:31.468 rmmod nvme_tcp 00:19:31.468 rmmod nvme_fabrics 00:19:31.468 rmmod nvme_keyring 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:31.468 
08:15:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4094011 ']' 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4094011 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 4094011 ']' 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 4094011 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4094011 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4094011' 00:19:31.468 killing process with pid 4094011 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 4094011 00:19:31.468 08:15:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 4094011 00:19:31.727 [2024-07-21 08:15:41.147385] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:19:31.727 08:15:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:31.727 08:15:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:31.727 08:15:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:31.727 08:15:41 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.727 08:15:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:31.727 08:15:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.727 08:15:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.727 08:15:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.628 08:15:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:33.628 08:15:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:33.628 00:19:33.628 real 0m8.712s 00:19:33.628 user 0m20.142s 00:19:33.628 sys 0m2.564s 00:19:33.628 08:15:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:33.628 08:15:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:33.628 ************************************ 00:19:33.628 END TEST nvmf_host_management 00:19:33.628 ************************************ 00:19:33.628 08:15:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:33.628 08:15:43 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:33.628 08:15:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:33.628 08:15:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:33.628 08:15:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:33.885 ************************************ 00:19:33.885 START TEST nvmf_lvol 00:19:33.885 ************************************ 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:33.885 * 
Looking for test storage... 00:19:33.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.885 08:15:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:35.782 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:35.782 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:35.782 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:35.782 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:35.782 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.783 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.783 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:35.783 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.783 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.783 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:35.783 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:35.783 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:19:35.783 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.783 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.783 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.783 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:36.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:19:36.059 00:19:36.059 --- 10.0.0.2 ping statistics --- 00:19:36.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.059 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:36.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:19:36.059 00:19:36.059 --- 10.0.0.1 ping statistics --- 00:19:36.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.059 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4096533 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4096533 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 4096533 ']' 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.059 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:36.059 [2024-07-21 08:15:45.538171] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:19:36.059 [2024-07-21 08:15:45.538242] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.059 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.059 [2024-07-21 08:15:45.605587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:36.318 [2024-07-21 08:15:45.697107] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.318 [2024-07-21 08:15:45.697165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.318 [2024-07-21 08:15:45.697191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.318 [2024-07-21 08:15:45.697212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.318 [2024-07-21 08:15:45.697224] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:36.318 [2024-07-21 08:15:45.697293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.318 [2024-07-21 08:15:45.697346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.318 [2024-07-21 08:15:45.697350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.318 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.318 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:19:36.318 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.318 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.318 08:15:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:36.318 08:15:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.318 08:15:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:36.578 [2024-07-21 08:15:46.062350] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.578 08:15:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:36.837 08:15:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:19:36.837 08:15:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:37.095 08:15:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:19:37.095 08:15:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:19:37.353 08:15:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:19:37.610 08:15:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e0e5e644-126f-42b7-94d0-579e0132ce75 00:19:37.610 08:15:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e0e5e644-126f-42b7-94d0-579e0132ce75 lvol 20 00:19:37.876 08:15:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1627b45b-0501-4dc2-8279-fe4f94bed6a1 00:19:37.876 08:15:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:38.134 08:15:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1627b45b-0501-4dc2-8279-fe4f94bed6a1 00:19:38.390 08:15:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:38.648 [2024-07-21 08:15:48.137354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.648 08:15:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:38.906 08:15:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4096955 00:19:38.906 08:15:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:19:38.906 08:15:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:19:38.906 EAL: No free 2048 kB hugepages reported on node 1 
00:19:39.841 08:15:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1627b45b-0501-4dc2-8279-fe4f94bed6a1 MY_SNAPSHOT 00:19:40.407 08:15:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0d78946a-b5ab-4b61-8fd3-ce6305417675 00:19:40.407 08:15:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1627b45b-0501-4dc2-8279-fe4f94bed6a1 30 00:19:40.664 08:15:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0d78946a-b5ab-4b61-8fd3-ce6305417675 MY_CLONE 00:19:40.921 08:15:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0a8d4b33-c129-40c7-bb4a-30159974e069 00:19:40.921 08:15:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0a8d4b33-c129-40c7-bb4a-30159974e069 00:19:41.488 08:15:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4096955 00:19:49.601 Initializing NVMe Controllers 00:19:49.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:49.601 Controller IO queue size 128, less than required. 00:19:49.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:49.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:19:49.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:19:49.601 Initialization complete. Launching workers. 
00:19:49.601 ======================================================== 00:19:49.601 Latency(us) 00:19:49.601 Device Information : IOPS MiB/s Average min max 00:19:49.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10689.70 41.76 11973.89 1480.24 70564.98 00:19:49.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10551.30 41.22 12135.14 2006.40 74573.52 00:19:49.601 ======================================================== 00:19:49.601 Total : 21241.00 82.97 12053.99 1480.24 74573.52 00:19:49.601 00:19:49.601 08:15:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:49.601 08:15:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1627b45b-0501-4dc2-8279-fe4f94bed6a1 00:19:49.860 08:15:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e0e5e644-126f-42b7-94d0-579e0132ce75 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:50.120 rmmod nvme_tcp 00:19:50.120 rmmod nvme_fabrics 00:19:50.120 rmmod nvme_keyring 00:19:50.120 
08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4096533 ']' 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4096533 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 4096533 ']' 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 4096533 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096533 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096533' 00:19:50.120 killing process with pid 4096533 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 4096533 00:19:50.120 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 4096533 00:19:50.378 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:50.378 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:50.378 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:50.378 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.378 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:50.378 08:15:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.378 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.378 08:15:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:52.911 00:19:52.911 real 0m18.745s 00:19:52.911 user 1m4.197s 00:19:52.911 sys 0m5.571s 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:52.911 ************************************ 00:19:52.911 END TEST nvmf_lvol 00:19:52.911 ************************************ 00:19:52.911 08:16:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:52.911 08:16:02 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:52.911 08:16:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:52.911 08:16:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.911 08:16:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:52.911 ************************************ 00:19:52.911 START TEST nvmf_lvs_grow 00:19:52.911 ************************************ 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:52.911 * Looking for test storage... 
00:19:52.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.911 08:16:02 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.911 08:16:02 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:52.911 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:52.912 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.912 08:16:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.912 08:16:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.912 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:52.912 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:52.912 08:16:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.912 08:16:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:54.807 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:54.807 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:19:54.807 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:54.807 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:54.807 08:16:04 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.808 08:16:04 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:54.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:54.808 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.808 08:16:04 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:54.808 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:54.808 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:54.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:19:54.808 00:19:54.808 --- 10.0.0.2 ping statistics --- 00:19:54.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.808 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:54.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:19:54.808 00:19:54.808 --- 10.0.0.1 ping statistics --- 00:19:54.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.808 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4100181 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4100181 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 4100181 ']' 
00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.808 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:54.808 [2024-07-21 08:16:04.292012] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:19:54.808 [2024-07-21 08:16:04.292097] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.808 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.808 [2024-07-21 08:16:04.354475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.065 [2024-07-21 08:16:04.451704] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.065 [2024-07-21 08:16:04.451770] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.065 [2024-07-21 08:16:04.451786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.065 [2024-07-21 08:16:04.451799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.065 [2024-07-21 08:16:04.451811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:55.065 [2024-07-21 08:16:04.451847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.065 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.065 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:19:55.065 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.065 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.065 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:55.065 08:16:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.065 08:16:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:55.322 [2024-07-21 08:16:04.853404] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:55.322 ************************************ 00:19:55.322 START TEST lvs_grow_clean 00:19:55.322 ************************************ 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:55.322 08:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:55.578 08:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:55.579 08:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:55.836 08:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:19:55.836 08:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:19:55.836 08:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:56.093 08:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:56.093 08:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:56.093 08:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 lvol 150 00:19:56.350 08:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e407e24c-4d83-483a-80f6-41099b9bd288 00:19:56.350 08:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:56.350 08:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:56.638 [2024-07-21 08:16:06.190880] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:56.638 [2024-07-21 08:16:06.191011] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:56.638 true 00:19:56.638 08:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:19:56.638 08:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:56.896 08:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:56.896 08:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:19:57.153 08:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e407e24c-4d83-483a-80f6-41099b9bd288 00:19:57.410 08:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:57.666 [2024-07-21 08:16:07.230088] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.666 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:57.924 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4100533 00:19:57.924 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:57.924 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:57.924 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4100533 /var/tmp/bdevperf.sock 00:19:57.924 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 4100533 ']' 00:19:57.924 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.924 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.924 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.924 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.924 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:19:57.924 [2024-07-21 08:16:07.529712] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:19:57.924 [2024-07-21 08:16:07.529781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100533 ] 00:19:58.180 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.180 [2024-07-21 08:16:07.592194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.180 [2024-07-21 08:16:07.685284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.180 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.180 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:19:58.180 08:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:58.744 Nvme0n1 00:19:58.744 08:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:59.000 [ 00:19:59.000 { 00:19:59.000 "name": "Nvme0n1", 00:19:59.000 "aliases": [ 00:19:59.000 "e407e24c-4d83-483a-80f6-41099b9bd288" 
00:19:59.000 ], 00:19:59.000 "product_name": "NVMe disk", 00:19:59.000 "block_size": 4096, 00:19:59.000 "num_blocks": 38912, 00:19:59.000 "uuid": "e407e24c-4d83-483a-80f6-41099b9bd288", 00:19:59.000 "assigned_rate_limits": { 00:19:59.000 "rw_ios_per_sec": 0, 00:19:59.000 "rw_mbytes_per_sec": 0, 00:19:59.000 "r_mbytes_per_sec": 0, 00:19:59.000 "w_mbytes_per_sec": 0 00:19:59.000 }, 00:19:59.000 "claimed": false, 00:19:59.000 "zoned": false, 00:19:59.000 "supported_io_types": { 00:19:59.000 "read": true, 00:19:59.000 "write": true, 00:19:59.000 "unmap": true, 00:19:59.000 "flush": true, 00:19:59.000 "reset": true, 00:19:59.000 "nvme_admin": true, 00:19:59.000 "nvme_io": true, 00:19:59.000 "nvme_io_md": false, 00:19:59.000 "write_zeroes": true, 00:19:59.000 "zcopy": false, 00:19:59.000 "get_zone_info": false, 00:19:59.000 "zone_management": false, 00:19:59.000 "zone_append": false, 00:19:59.000 "compare": true, 00:19:59.000 "compare_and_write": true, 00:19:59.000 "abort": true, 00:19:59.000 "seek_hole": false, 00:19:59.000 "seek_data": false, 00:19:59.000 "copy": true, 00:19:59.000 "nvme_iov_md": false 00:19:59.000 }, 00:19:59.000 "memory_domains": [ 00:19:59.000 { 00:19:59.000 "dma_device_id": "system", 00:19:59.000 "dma_device_type": 1 00:19:59.000 } 00:19:59.000 ], 00:19:59.000 "driver_specific": { 00:19:59.000 "nvme": [ 00:19:59.000 { 00:19:59.000 "trid": { 00:19:59.000 "trtype": "TCP", 00:19:59.000 "adrfam": "IPv4", 00:19:59.000 "traddr": "10.0.0.2", 00:19:59.000 "trsvcid": "4420", 00:19:59.000 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:59.000 }, 00:19:59.000 "ctrlr_data": { 00:19:59.000 "cntlid": 1, 00:19:59.000 "vendor_id": "0x8086", 00:19:59.000 "model_number": "SPDK bdev Controller", 00:19:59.000 "serial_number": "SPDK0", 00:19:59.000 "firmware_revision": "24.09", 00:19:59.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.000 "oacs": { 00:19:59.000 "security": 0, 00:19:59.000 "format": 0, 00:19:59.000 "firmware": 0, 00:19:59.000 "ns_manage": 0 
00:19:59.000 }, 00:19:59.000 "multi_ctrlr": true, 00:19:59.000 "ana_reporting": false 00:19:59.000 }, 00:19:59.000 "vs": { 00:19:59.000 "nvme_version": "1.3" 00:19:59.000 }, 00:19:59.000 "ns_data": { 00:19:59.000 "id": 1, 00:19:59.000 "can_share": true 00:19:59.000 } 00:19:59.000 } 00:19:59.000 ], 00:19:59.000 "mp_policy": "active_passive" 00:19:59.000 } 00:19:59.000 } 00:19:59.000 ] 00:19:59.000 08:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4100668 00:19:59.000 08:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:59.000 08:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:59.256 Running I/O for 10 seconds... 00:20:00.186 Latency(us) 00:20:00.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:00.186 Nvme0n1 : 1.00 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:20:00.186 =================================================================================================================== 00:20:00.186 Total : 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:20:00.186 00:20:01.116 08:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:20:01.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:01.116 Nvme0n1 : 2.00 14545.50 56.82 0.00 0.00 0.00 0.00 0.00 00:20:01.116 =================================================================================================================== 00:20:01.116 Total : 14545.50 56.82 0.00 0.00 0.00 0.00 0.00 00:20:01.116 00:20:01.372 true 00:20:01.372 08:16:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:20:01.372 08:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:01.629 08:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:01.629 08:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:01.629 08:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4100668 00:20:02.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:02.192 Nvme0n1 : 3.00 14607.67 57.06 0.00 0.00 0.00 0.00 0.00 00:20:02.192 =================================================================================================================== 00:20:02.192 Total : 14607.67 57.06 0.00 0.00 0.00 0.00 0.00 00:20:02.192 00:20:03.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:03.124 Nvme0n1 : 4.00 14734.00 57.55 0.00 0.00 0.00 0.00 0.00 00:20:03.124 =================================================================================================================== 00:20:03.124 Total : 14734.00 57.55 0.00 0.00 0.00 0.00 0.00 00:20:03.124 00:20:04.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:04.492 Nvme0n1 : 5.00 14733.60 57.55 0.00 0.00 0.00 0.00 0.00 00:20:04.493 =================================================================================================================== 00:20:04.493 Total : 14733.60 57.55 0.00 0.00 0.00 0.00 0.00 00:20:04.493 00:20:05.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:05.424 Nvme0n1 : 6.00 14754.50 57.63 0.00 0.00 0.00 0.00 0.00 00:20:05.424 
=================================================================================================================== 00:20:05.424 Total : 14754.50 57.63 0.00 0.00 0.00 0.00 0.00 00:20:05.424 00:20:06.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:06.355 Nvme0n1 : 7.00 14842.00 57.98 0.00 0.00 0.00 0.00 0.00 00:20:06.355 =================================================================================================================== 00:20:06.355 Total : 14842.00 57.98 0.00 0.00 0.00 0.00 0.00 00:20:06.355 00:20:07.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:07.284 Nvme0n1 : 8.00 14844.12 57.98 0.00 0.00 0.00 0.00 0.00 00:20:07.284 =================================================================================================================== 00:20:07.284 Total : 14844.12 57.98 0.00 0.00 0.00 0.00 0.00 00:20:07.284 00:20:08.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:08.215 Nvme0n1 : 9.00 14852.89 58.02 0.00 0.00 0.00 0.00 0.00 00:20:08.215 =================================================================================================================== 00:20:08.215 Total : 14852.89 58.02 0.00 0.00 0.00 0.00 0.00 00:20:08.215 00:20:09.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:09.147 Nvme0n1 : 10.00 14910.60 58.24 0.00 0.00 0.00 0.00 0.00 00:20:09.147 =================================================================================================================== 00:20:09.147 Total : 14910.60 58.24 0.00 0.00 0.00 0.00 0.00 00:20:09.147 00:20:09.147 00:20:09.147 Latency(us) 00:20:09.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:09.147 Nvme0n1 : 10.00 14918.57 58.28 0.00 0.00 8574.99 5194.33 17379.18 00:20:09.147 
=================================================================================================================== 00:20:09.147 Total : 14918.57 58.28 0.00 0.00 8574.99 5194.33 17379.18 00:20:09.147 0 00:20:09.147 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4100533 00:20:09.147 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 4100533 ']' 00:20:09.147 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 4100533 00:20:09.147 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:20:09.147 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.147 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4100533 00:20:09.147 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:09.147 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:09.147 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4100533' 00:20:09.147 killing process with pid 4100533 00:20:09.147 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 4100533 00:20:09.147 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.147 00:20:09.147 Latency(us) 00:20:09.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.147 =================================================================================================================== 00:20:09.147 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.147 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 4100533 00:20:09.404 08:16:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:09.662 08:16:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:10.227 08:16:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:20:10.227 08:16:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:20:10.227 08:16:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:20:10.227 08:16:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:20:10.227 08:16:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:10.485 [2024-07-21 08:16:20.069434] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:20:10.748 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:20:10.748 request: 00:20:10.748 { 00:20:10.748 "uuid": "383c7d7e-fe6c-489d-936b-dcc022fbdc70", 00:20:10.748 "method": "bdev_lvol_get_lvstores", 00:20:10.748 "req_id": 1 00:20:10.748 } 00:20:10.748 Got JSON-RPC error response 00:20:10.748 response: 00:20:10.748 { 00:20:10.748 "code": -19, 00:20:10.748 "message": "No such device" 00:20:10.748 } 00:20:11.035 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:20:11.035 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:11.035 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:11.035 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:11.035 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:11.035 aio_bdev 00:20:11.035 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e407e24c-4d83-483a-80f6-41099b9bd288 00:20:11.035 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=e407e24c-4d83-483a-80f6-41099b9bd288 00:20:11.035 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:11.036 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:20:11.036 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:11.036 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:11.036 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:11.296 08:16:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e407e24c-4d83-483a-80f6-41099b9bd288 -t 2000 00:20:11.554 [ 00:20:11.554 { 00:20:11.554 "name": "e407e24c-4d83-483a-80f6-41099b9bd288", 00:20:11.554 "aliases": [ 00:20:11.554 "lvs/lvol" 00:20:11.554 ], 00:20:11.554 "product_name": "Logical Volume", 00:20:11.554 "block_size": 4096, 00:20:11.554 "num_blocks": 38912, 00:20:11.554 "uuid": "e407e24c-4d83-483a-80f6-41099b9bd288", 00:20:11.554 "assigned_rate_limits": { 00:20:11.554 
"rw_ios_per_sec": 0, 00:20:11.554 "rw_mbytes_per_sec": 0, 00:20:11.554 "r_mbytes_per_sec": 0, 00:20:11.554 "w_mbytes_per_sec": 0 00:20:11.554 }, 00:20:11.554 "claimed": false, 00:20:11.554 "zoned": false, 00:20:11.554 "supported_io_types": { 00:20:11.554 "read": true, 00:20:11.554 "write": true, 00:20:11.554 "unmap": true, 00:20:11.554 "flush": false, 00:20:11.554 "reset": true, 00:20:11.554 "nvme_admin": false, 00:20:11.554 "nvme_io": false, 00:20:11.554 "nvme_io_md": false, 00:20:11.554 "write_zeroes": true, 00:20:11.554 "zcopy": false, 00:20:11.554 "get_zone_info": false, 00:20:11.554 "zone_management": false, 00:20:11.554 "zone_append": false, 00:20:11.554 "compare": false, 00:20:11.554 "compare_and_write": false, 00:20:11.554 "abort": false, 00:20:11.554 "seek_hole": true, 00:20:11.554 "seek_data": true, 00:20:11.554 "copy": false, 00:20:11.554 "nvme_iov_md": false 00:20:11.554 }, 00:20:11.554 "driver_specific": { 00:20:11.554 "lvol": { 00:20:11.554 "lvol_store_uuid": "383c7d7e-fe6c-489d-936b-dcc022fbdc70", 00:20:11.554 "base_bdev": "aio_bdev", 00:20:11.554 "thin_provision": false, 00:20:11.554 "num_allocated_clusters": 38, 00:20:11.554 "snapshot": false, 00:20:11.554 "clone": false, 00:20:11.554 "esnap_clone": false 00:20:11.554 } 00:20:11.554 } 00:20:11.554 } 00:20:11.554 ] 00:20:11.554 08:16:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:20:11.554 08:16:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:20:11.554 08:16:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:20:11.812 08:16:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:20:11.812 08:16:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:20:11.812 08:16:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:20:12.070 08:16:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:20:12.070 08:16:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e407e24c-4d83-483a-80f6-41099b9bd288 00:20:12.327 08:16:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 383c7d7e-fe6c-489d-936b-dcc022fbdc70 00:20:12.584 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:12.841 00:20:12.841 real 0m17.499s 00:20:12.841 user 0m16.704s 00:20:12.841 sys 0m1.999s 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:20:12.841 ************************************ 00:20:12.841 END TEST lvs_grow_clean 00:20:12.841 ************************************ 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:12.841 ************************************ 00:20:12.841 START TEST lvs_grow_dirty 00:20:12.841 ************************************ 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:12.841 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:13.405 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:13.405 08:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:13.405 08:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=69f59e20-c66b-4e82-9658-eb736aad7626 00:20:13.661 08:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:13.661 08:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:13.661 08:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:13.661 08:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:13.661 08:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 69f59e20-c66b-4e82-9658-eb736aad7626 lvol 150 00:20:13.917 08:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9c62a1f3-4d60-44d0-814c-adee2c70ffb5 00:20:13.918 08:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:13.918 08:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:14.175 [2024-07-21 08:16:23.762867] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:14.175 [2024-07-21 08:16:23.762982] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:14.175 
true 00:20:14.175 08:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:14.175 08:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:14.432 08:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:14.432 08:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:14.689 08:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9c62a1f3-4d60-44d0-814c-adee2c70ffb5 00:20:14.946 08:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:15.202 [2024-07-21 08:16:24.785953] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.202 08:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:15.458 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4102693 00:20:15.458 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:15.458 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- 
# trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.458 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4102693 /var/tmp/bdevperf.sock 00:20:15.458 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4102693 ']' 00:20:15.458 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.458 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.458 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.458 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.458 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:15.715 [2024-07-21 08:16:25.091643] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:20:15.715 [2024-07-21 08:16:25.091716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102693 ] 00:20:15.715 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.715 [2024-07-21 08:16:25.153955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.715 [2024-07-21 08:16:25.244901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.971 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.971 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:20:15.971 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:16.228 Nvme0n1 00:20:16.228 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:16.485 [ 00:20:16.485 { 00:20:16.485 "name": "Nvme0n1", 00:20:16.485 "aliases": [ 00:20:16.485 "9c62a1f3-4d60-44d0-814c-adee2c70ffb5" 00:20:16.485 ], 00:20:16.485 "product_name": "NVMe disk", 00:20:16.485 "block_size": 4096, 00:20:16.485 "num_blocks": 38912, 00:20:16.485 "uuid": "9c62a1f3-4d60-44d0-814c-adee2c70ffb5", 00:20:16.485 "assigned_rate_limits": { 00:20:16.485 "rw_ios_per_sec": 0, 00:20:16.485 "rw_mbytes_per_sec": 0, 00:20:16.485 "r_mbytes_per_sec": 0, 00:20:16.485 "w_mbytes_per_sec": 0 00:20:16.485 }, 00:20:16.485 "claimed": false, 00:20:16.485 "zoned": false, 00:20:16.485 "supported_io_types": { 00:20:16.485 "read": true, 00:20:16.485 "write": true, 
00:20:16.485 "unmap": true, 00:20:16.485 "flush": true, 00:20:16.485 "reset": true, 00:20:16.485 "nvme_admin": true, 00:20:16.485 "nvme_io": true, 00:20:16.485 "nvme_io_md": false, 00:20:16.485 "write_zeroes": true, 00:20:16.485 "zcopy": false, 00:20:16.485 "get_zone_info": false, 00:20:16.485 "zone_management": false, 00:20:16.485 "zone_append": false, 00:20:16.485 "compare": true, 00:20:16.485 "compare_and_write": true, 00:20:16.485 "abort": true, 00:20:16.485 "seek_hole": false, 00:20:16.485 "seek_data": false, 00:20:16.485 "copy": true, 00:20:16.485 "nvme_iov_md": false 00:20:16.485 }, 00:20:16.485 "memory_domains": [ 00:20:16.485 { 00:20:16.485 "dma_device_id": "system", 00:20:16.485 "dma_device_type": 1 00:20:16.485 } 00:20:16.485 ], 00:20:16.485 "driver_specific": { 00:20:16.485 "nvme": [ 00:20:16.485 { 00:20:16.485 "trid": { 00:20:16.485 "trtype": "TCP", 00:20:16.485 "adrfam": "IPv4", 00:20:16.485 "traddr": "10.0.0.2", 00:20:16.485 "trsvcid": "4420", 00:20:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:16.485 }, 00:20:16.485 "ctrlr_data": { 00:20:16.485 "cntlid": 1, 00:20:16.485 "vendor_id": "0x8086", 00:20:16.485 "model_number": "SPDK bdev Controller", 00:20:16.485 "serial_number": "SPDK0", 00:20:16.485 "firmware_revision": "24.09", 00:20:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:16.485 "oacs": { 00:20:16.485 "security": 0, 00:20:16.485 "format": 0, 00:20:16.485 "firmware": 0, 00:20:16.485 "ns_manage": 0 00:20:16.485 }, 00:20:16.485 "multi_ctrlr": true, 00:20:16.485 "ana_reporting": false 00:20:16.485 }, 00:20:16.485 "vs": { 00:20:16.485 "nvme_version": "1.3" 00:20:16.485 }, 00:20:16.485 "ns_data": { 00:20:16.485 "id": 1, 00:20:16.485 "can_share": true 00:20:16.485 } 00:20:16.485 } 00:20:16.485 ], 00:20:16.485 "mp_policy": "active_passive" 00:20:16.485 } 00:20:16.485 } 00:20:16.485 ] 00:20:16.485 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4102830 00:20:16.485 08:16:25 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:20:16.485 08:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:16.485 Running I/O for 10 seconds... 00:20:17.858 Latency(us) 00:20:17.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:17.858 Nvme0n1 : 1.00 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:20:17.858 =================================================================================================================== 00:20:17.858 Total : 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:20:17.858 00:20:18.424 08:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:18.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:18.682 Nvme0n1 : 2.00 14605.50 57.05 0.00 0.00 0.00 0.00 0.00 00:20:18.682 =================================================================================================================== 00:20:18.682 Total : 14605.50 57.05 0.00 0.00 0.00 0.00 0.00 00:20:18.682 00:20:18.682 true 00:20:18.682 08:16:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:18.682 08:16:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:18.940 08:16:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:18.940 08:16:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 
00:20:18.940 08:16:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4102830 00:20:19.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:19.504 Nvme0n1 : 3.00 14734.33 57.56 0.00 0.00 0.00 0.00 0.00 00:20:19.504 =================================================================================================================== 00:20:19.504 Total : 14734.33 57.56 0.00 0.00 0.00 0.00 0.00 00:20:19.504 00:20:20.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:20.875 Nvme0n1 : 4.00 14829.00 57.93 0.00 0.00 0.00 0.00 0.00 00:20:20.875 =================================================================================================================== 00:20:20.875 Total : 14829.00 57.93 0.00 0.00 0.00 0.00 0.00 00:20:20.875 00:20:21.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:21.807 Nvme0n1 : 5.00 14939.80 58.36 0.00 0.00 0.00 0.00 0.00 00:20:21.807 =================================================================================================================== 00:20:21.807 Total : 14939.80 58.36 0.00 0.00 0.00 0.00 0.00 00:20:21.807 00:20:22.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:22.738 Nvme0n1 : 6.00 14926.33 58.31 0.00 0.00 0.00 0.00 0.00 00:20:22.738 =================================================================================================================== 00:20:22.738 Total : 14926.33 58.31 0.00 0.00 0.00 0.00 0.00 00:20:22.738 00:20:23.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:23.667 Nvme0n1 : 7.00 15017.14 58.66 0.00 0.00 0.00 0.00 0.00 00:20:23.667 =================================================================================================================== 00:20:23.667 Total : 15017.14 58.66 0.00 0.00 0.00 0.00 0.00 00:20:23.667 00:20:24.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:20:24.599 Nvme0n1 : 8.00 15069.00 58.86 0.00 0.00 0.00 0.00 0.00 00:20:24.599 =================================================================================================================== 00:20:24.599 Total : 15069.00 58.86 0.00 0.00 0.00 0.00 0.00 00:20:24.599 00:20:25.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:25.569 Nvme0n1 : 9.00 15073.89 58.88 0.00 0.00 0.00 0.00 0.00 00:20:25.569 =================================================================================================================== 00:20:25.569 Total : 15073.89 58.88 0.00 0.00 0.00 0.00 0.00 00:20:25.569 00:20:26.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:26.500 Nvme0n1 : 10.00 15128.60 59.10 0.00 0.00 0.00 0.00 0.00 00:20:26.500 =================================================================================================================== 00:20:26.500 Total : 15128.60 59.10 0.00 0.00 0.00 0.00 0.00 00:20:26.500 00:20:26.500 00:20:26.500 Latency(us) 00:20:26.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:26.500 Nvme0n1 : 10.01 15132.70 59.11 0.00 0.00 8453.69 5048.70 18544.26 00:20:26.500 =================================================================================================================== 00:20:26.500 Total : 15132.70 59.11 0.00 0.00 8453.69 5048.70 18544.26 00:20:26.500 0 00:20:26.500 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4102693 00:20:26.501 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 4102693 ']' 00:20:26.501 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 4102693 00:20:26.501 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:20:26.501 08:16:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.501 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102693 00:20:26.758 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:26.758 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:26.758 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4102693' 00:20:26.758 killing process with pid 4102693 00:20:26.758 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 4102693 00:20:26.758 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.758 00:20:26.758 Latency(us) 00:20:26.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.758 =================================================================================================================== 00:20:26.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.758 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 4102693 00:20:26.758 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:27.331 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:27.588 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:27.588 08:16:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4100181 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4100181 00:20:27.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4100181 Killed "${NVMF_APP[@]}" "$@" 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4104155 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4104155 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4104155 ']' 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 
00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.845 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:27.845 [2024-07-21 08:16:37.323989] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:20:27.845 [2024-07-21 08:16:37.324065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.845 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.845 [2024-07-21 08:16:37.386663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.845 [2024-07-21 08:16:37.469952] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.845 [2024-07-21 08:16:37.470008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.845 [2024-07-21 08:16:37.470031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.845 [2024-07-21 08:16:37.470042] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.845 [2024-07-21 08:16:37.470052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:27.845 [2024-07-21 08:16:37.470078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.103 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.103 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:20:28.103 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.103 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:28.103 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:28.103 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.103 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:28.361 [2024-07-21 08:16:37.833312] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:28.361 [2024-07-21 08:16:37.833459] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:28.361 [2024-07-21 08:16:37.833516] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:28.361 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:20:28.361 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9c62a1f3-4d60-44d0-814c-adee2c70ffb5 00:20:28.361 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=9c62a1f3-4d60-44d0-814c-adee2c70ffb5 00:20:28.361 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:28.361 08:16:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:20:28.361 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:28.361 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:28.361 08:16:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:28.617 08:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9c62a1f3-4d60-44d0-814c-adee2c70ffb5 -t 2000 00:20:28.874 [ 00:20:28.874 { 00:20:28.874 "name": "9c62a1f3-4d60-44d0-814c-adee2c70ffb5", 00:20:28.874 "aliases": [ 00:20:28.874 "lvs/lvol" 00:20:28.874 ], 00:20:28.874 "product_name": "Logical Volume", 00:20:28.874 "block_size": 4096, 00:20:28.874 "num_blocks": 38912, 00:20:28.874 "uuid": "9c62a1f3-4d60-44d0-814c-adee2c70ffb5", 00:20:28.874 "assigned_rate_limits": { 00:20:28.874 "rw_ios_per_sec": 0, 00:20:28.874 "rw_mbytes_per_sec": 0, 00:20:28.874 "r_mbytes_per_sec": 0, 00:20:28.874 "w_mbytes_per_sec": 0 00:20:28.874 }, 00:20:28.874 "claimed": false, 00:20:28.874 "zoned": false, 00:20:28.874 "supported_io_types": { 00:20:28.874 "read": true, 00:20:28.874 "write": true, 00:20:28.874 "unmap": true, 00:20:28.874 "flush": false, 00:20:28.874 "reset": true, 00:20:28.874 "nvme_admin": false, 00:20:28.874 "nvme_io": false, 00:20:28.874 "nvme_io_md": false, 00:20:28.874 "write_zeroes": true, 00:20:28.874 "zcopy": false, 00:20:28.874 "get_zone_info": false, 00:20:28.874 "zone_management": false, 00:20:28.874 "zone_append": false, 00:20:28.874 "compare": false, 00:20:28.874 "compare_and_write": false, 00:20:28.874 "abort": false, 00:20:28.874 "seek_hole": true, 00:20:28.874 "seek_data": true, 00:20:28.874 "copy": false, 00:20:28.874 "nvme_iov_md": false 
00:20:28.874 }, 00:20:28.874 "driver_specific": { 00:20:28.874 "lvol": { 00:20:28.874 "lvol_store_uuid": "69f59e20-c66b-4e82-9658-eb736aad7626", 00:20:28.874 "base_bdev": "aio_bdev", 00:20:28.874 "thin_provision": false, 00:20:28.874 "num_allocated_clusters": 38, 00:20:28.874 "snapshot": false, 00:20:28.874 "clone": false, 00:20:28.874 "esnap_clone": false 00:20:28.874 } 00:20:28.874 } 00:20:28.874 } 00:20:28.874 ] 00:20:28.874 08:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:20:28.874 08:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:28.874 08:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:20:29.133 08:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:20:29.133 08:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:29.133 08:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:20:29.390 08:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:20:29.390 08:16:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:29.647 [2024-07-21 08:16:39.074184] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
69f59e20-c66b-4e82-9658-eb736aad7626 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:20:29.647 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:29.904 request: 00:20:29.904 { 00:20:29.904 "uuid": "69f59e20-c66b-4e82-9658-eb736aad7626", 00:20:29.904 "method": "bdev_lvol_get_lvstores", 
00:20:29.904 "req_id": 1 00:20:29.904 } 00:20:29.904 Got JSON-RPC error response 00:20:29.904 response: 00:20:29.904 { 00:20:29.904 "code": -19, 00:20:29.904 "message": "No such device" 00:20:29.904 } 00:20:29.904 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:20:29.904 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:29.904 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:29.904 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:29.904 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:30.160 aio_bdev 00:20:30.160 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9c62a1f3-4d60-44d0-814c-adee2c70ffb5 00:20:30.160 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=9c62a1f3-4d60-44d0-814c-adee2c70ffb5 00:20:30.160 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:30.160 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:20:30.160 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:30.160 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:30.160 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:30.418 08:16:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9c62a1f3-4d60-44d0-814c-adee2c70ffb5 -t 2000 00:20:30.675 [ 00:20:30.675 { 00:20:30.675 "name": "9c62a1f3-4d60-44d0-814c-adee2c70ffb5", 00:20:30.675 "aliases": [ 00:20:30.675 "lvs/lvol" 00:20:30.675 ], 00:20:30.675 "product_name": "Logical Volume", 00:20:30.675 "block_size": 4096, 00:20:30.675 "num_blocks": 38912, 00:20:30.675 "uuid": "9c62a1f3-4d60-44d0-814c-adee2c70ffb5", 00:20:30.675 "assigned_rate_limits": { 00:20:30.675 "rw_ios_per_sec": 0, 00:20:30.675 "rw_mbytes_per_sec": 0, 00:20:30.675 "r_mbytes_per_sec": 0, 00:20:30.675 "w_mbytes_per_sec": 0 00:20:30.675 }, 00:20:30.675 "claimed": false, 00:20:30.675 "zoned": false, 00:20:30.675 "supported_io_types": { 00:20:30.675 "read": true, 00:20:30.675 "write": true, 00:20:30.675 "unmap": true, 00:20:30.675 "flush": false, 00:20:30.675 "reset": true, 00:20:30.675 "nvme_admin": false, 00:20:30.675 "nvme_io": false, 00:20:30.675 "nvme_io_md": false, 00:20:30.675 "write_zeroes": true, 00:20:30.675 "zcopy": false, 00:20:30.675 "get_zone_info": false, 00:20:30.675 "zone_management": false, 00:20:30.675 "zone_append": false, 00:20:30.675 "compare": false, 00:20:30.675 "compare_and_write": false, 00:20:30.675 "abort": false, 00:20:30.675 "seek_hole": true, 00:20:30.675 "seek_data": true, 00:20:30.675 "copy": false, 00:20:30.675 "nvme_iov_md": false 00:20:30.675 }, 00:20:30.675 "driver_specific": { 00:20:30.675 "lvol": { 00:20:30.675 "lvol_store_uuid": "69f59e20-c66b-4e82-9658-eb736aad7626", 00:20:30.675 "base_bdev": "aio_bdev", 00:20:30.675 "thin_provision": false, 00:20:30.675 "num_allocated_clusters": 38, 00:20:30.675 "snapshot": false, 00:20:30.675 "clone": false, 00:20:30.675 "esnap_clone": false 00:20:30.675 } 00:20:30.675 } 00:20:30.675 } 00:20:30.675 ] 00:20:30.675 08:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:20:30.675 08:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:30.675 08:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:20:30.932 08:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:20:30.932 08:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:30.932 08:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:20:31.189 08:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:20:31.189 08:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9c62a1f3-4d60-44d0-814c-adee2c70ffb5 00:20:31.447 08:16:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 69f59e20-c66b-4e82-9658-eb736aad7626 00:20:31.705 08:16:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:31.962 00:20:31.962 real 0m18.971s 00:20:31.962 user 0m48.099s 00:20:31.962 sys 0m4.776s 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:20:31.962 ************************************ 00:20:31.962 END TEST lvs_grow_dirty 00:20:31.962 ************************************ 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:31.962 nvmf_trace.0 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.962 rmmod 
nvme_tcp 00:20:31.962 rmmod nvme_fabrics 00:20:31.962 rmmod nvme_keyring 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4104155 ']' 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4104155 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 4104155 ']' 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 4104155 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4104155 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4104155' 00:20:31.962 killing process with pid 4104155 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 4104155 00:20:31.962 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 4104155 00:20:32.219 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.219 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.219 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.219 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:20:32.219 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.219 08:16:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.219 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.219 08:16:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.745 08:16:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.745 00:20:34.745 real 0m41.791s 00:20:34.745 user 1m10.420s 00:20:34.745 sys 0m8.660s 00:20:34.745 08:16:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.745 08:16:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:34.745 ************************************ 00:20:34.745 END TEST nvmf_lvs_grow 00:20:34.745 ************************************ 00:20:34.745 08:16:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:34.745 08:16:43 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:34.745 08:16:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:34.745 08:16:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.745 08:16:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:34.745 ************************************ 00:20:34.745 START TEST nvmf_bdev_io_wait 00:20:34.745 ************************************ 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:34.745 * Looking for test storage... 
00:20:34.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.745 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.746 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.746 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.746 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.746 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:34.746 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:34.746 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.746 08:16:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.644 08:16:46 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:36.644 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:36.644 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:36.644 08:16:46 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:36.644 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:36.644 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.644 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:36.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:36.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:20:36.645 00:20:36.645 --- 10.0.0.2 ping statistics --- 00:20:36.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.645 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:20:36.645 00:20:36.645 --- 10.0.0.1 ping statistics --- 00:20:36.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.645 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4106648 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4106648 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 4106648 ']' 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.645 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:36.645 [2024-07-21 08:16:46.228231] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:20:36.645 [2024-07-21 08:16:46.228318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.645 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.902 [2024-07-21 08:16:46.299363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.902 [2024-07-21 08:16:46.388670] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:36.902 [2024-07-21 08:16:46.388727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.902 [2024-07-21 08:16:46.388740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.902 [2024-07-21 08:16:46.388751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.902 [2024-07-21 08:16:46.388761] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.902 [2024-07-21 08:16:46.388816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.902 [2024-07-21 08:16:46.388877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.902 [2024-07-21 08:16:46.388941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.902 [2024-07-21 08:16:46.388944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:36.902 [2024-07-21 08:16:46.519259] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.902 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:37.160 Malloc0 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:37.160 [2024-07-21 08:16:46.587862] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4106702 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4106704 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.160 { 00:20:37.160 "params": { 00:20:37.160 "name": "Nvme$subsystem", 00:20:37.160 "trtype": "$TEST_TRANSPORT", 
00:20:37.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.160 "adrfam": "ipv4", 00:20:37.160 "trsvcid": "$NVMF_PORT", 00:20:37.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.160 "hdgst": ${hdgst:-false}, 00:20:37.160 "ddgst": ${ddgst:-false} 00:20:37.160 }, 00:20:37.160 "method": "bdev_nvme_attach_controller" 00:20:37.160 } 00:20:37.160 EOF 00:20:37.160 )") 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4106706 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.160 { 00:20:37.160 "params": { 00:20:37.160 "name": "Nvme$subsystem", 00:20:37.160 "trtype": "$TEST_TRANSPORT", 00:20:37.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.160 "adrfam": "ipv4", 00:20:37.160 "trsvcid": "$NVMF_PORT", 00:20:37.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.160 "hdgst": ${hdgst:-false}, 00:20:37.160 "ddgst": ${ddgst:-false} 00:20:37.160 }, 00:20:37.160 "method": "bdev_nvme_attach_controller" 00:20:37.160 } 00:20:37.160 EOF 00:20:37.160 )") 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 
00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4106709 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.160 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.161 { 00:20:37.161 "params": { 00:20:37.161 "name": "Nvme$subsystem", 00:20:37.161 "trtype": "$TEST_TRANSPORT", 00:20:37.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.161 "adrfam": "ipv4", 00:20:37.161 "trsvcid": "$NVMF_PORT", 00:20:37.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.161 "hdgst": ${hdgst:-false}, 00:20:37.161 "ddgst": ${ddgst:-false} 00:20:37.161 }, 00:20:37.161 "method": "bdev_nvme_attach_controller" 00:20:37.161 } 00:20:37.161 EOF 00:20:37.161 )") 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.161 { 00:20:37.161 "params": { 00:20:37.161 "name": "Nvme$subsystem", 00:20:37.161 "trtype": "$TEST_TRANSPORT", 00:20:37.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.161 "adrfam": "ipv4", 00:20:37.161 "trsvcid": "$NVMF_PORT", 00:20:37.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.161 "hdgst": ${hdgst:-false}, 00:20:37.161 "ddgst": ${ddgst:-false} 00:20:37.161 }, 00:20:37.161 "method": "bdev_nvme_attach_controller" 00:20:37.161 } 00:20:37.161 EOF 00:20:37.161 )") 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4106702 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:37.161 "params": { 00:20:37.161 "name": "Nvme1", 00:20:37.161 "trtype": "tcp", 00:20:37.161 "traddr": "10.0.0.2", 00:20:37.161 "adrfam": "ipv4", 00:20:37.161 "trsvcid": "4420", 00:20:37.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.161 "hdgst": false, 00:20:37.161 "ddgst": false 00:20:37.161 }, 00:20:37.161 "method": "bdev_nvme_attach_controller" 00:20:37.161 }' 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:37.161 "params": { 00:20:37.161 "name": "Nvme1", 00:20:37.161 "trtype": "tcp", 00:20:37.161 "traddr": "10.0.0.2", 00:20:37.161 "adrfam": "ipv4", 00:20:37.161 "trsvcid": "4420", 00:20:37.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.161 "hdgst": false, 00:20:37.161 "ddgst": false 00:20:37.161 }, 00:20:37.161 "method": "bdev_nvme_attach_controller" 00:20:37.161 }' 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:37.161 "params": { 00:20:37.161 "name": "Nvme1", 00:20:37.161 "trtype": "tcp", 00:20:37.161 "traddr": "10.0.0.2", 00:20:37.161 "adrfam": "ipv4", 00:20:37.161 "trsvcid": "4420", 00:20:37.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.161 "hdgst": false, 00:20:37.161 "ddgst": false 00:20:37.161 }, 00:20:37.161 "method": "bdev_nvme_attach_controller" 00:20:37.161 }' 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:37.161 08:16:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:37.161 "params": { 00:20:37.161 "name": "Nvme1", 00:20:37.161 "trtype": "tcp", 00:20:37.161 "traddr": "10.0.0.2", 00:20:37.161 "adrfam": "ipv4", 00:20:37.161 "trsvcid": "4420", 00:20:37.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.161 "hdgst": false, 00:20:37.161 "ddgst": false 00:20:37.161 }, 00:20:37.161 "method": "bdev_nvme_attach_controller" 00:20:37.161 }' 00:20:37.161 [2024-07-21 08:16:46.633658] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:20:37.161 [2024-07-21 08:16:46.633660] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:20:37.161 [2024-07-21 08:16:46.633751] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:37.161 [2024-07-21 08:16:46.633752] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:20:37.161 [2024-07-21 08:16:46.633819] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:20:37.161 [2024-07-21 08:16:46.633818] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:20:37.161 [2024-07-21 08:16:46.633894] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:20:37.161 [2024-07-21 08:16:46.633894] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:20:37.161 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.161 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.419 [2024-07-21 08:16:46.811160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.419 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.419 [2024-07-21 08:16:46.885761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:37.419 [2024-07-21 08:16:46.911994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.419 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.419 [2024-07-21 08:16:46.979402]
app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.419 [2024-07-21 08:16:46.989942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:37.676 [2024-07-21 08:16:47.050589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:37.676 [2024-07-21 08:16:47.054093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.676 [2024-07-21 08:16:47.124741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:20:37.677 Running I/O for 1 seconds... 00:20:37.677 Running I/O for 1 seconds... 00:20:37.677 Running I/O for 1 seconds... 00:20:37.934 Running I/O for 1 seconds... 00:20:38.868 00:20:38.868 Latency(us) 00:20:38.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.868 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:20:38.868 Nvme1n1 : 1.01 8842.13 34.54 0.00 0.00 14408.31 8252.68 27379.48 00:20:38.868 =================================================================================================================== 00:20:38.868 Total : 8842.13 34.54 0.00 0.00 14408.31 8252.68 27379.48 00:20:38.868 00:20:38.868 Latency(us) 00:20:38.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.868 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:20:38.868 Nvme1n1 : 1.02 6431.94 25.12 0.00 0.00 19702.55 11456.66 29321.29 00:20:38.868 =================================================================================================================== 00:20:38.868 Total : 6431.94 25.12 0.00 0.00 19702.55 11456.66 29321.29 00:20:38.868 00:20:38.868 Latency(us) 00:20:38.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.868 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:20:38.868 Nvme1n1 : 1.00 172062.88 672.12 0.00 0.00 741.00 273.07 970.90 00:20:38.868 
=================================================================================================================== 00:20:38.868 Total : 172062.88 672.12 0.00 0.00 741.00 273.07 970.90 00:20:38.868 00:20:38.869 Latency(us) 00:20:38.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.869 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:20:38.869 Nvme1n1 : 1.00 6892.56 26.92 0.00 0.00 18518.34 4805.97 39807.05 00:20:38.869 =================================================================================================================== 00:20:38.869 Total : 6892.56 26.92 0.00 0.00 18518.34 4805.97 39807.05 00:20:39.126 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4106704 00:20:39.126 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4106706 00:20:39.126 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4106709 00:20:39.126 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.126 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.126 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.127 rmmod nvme_tcp 00:20:39.127 rmmod nvme_fabrics 00:20:39.127 rmmod nvme_keyring 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4106648 ']' 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4106648 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 4106648 ']' 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 4106648 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4106648 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4106648' 00:20:39.127 killing process with pid 4106648 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 4106648 00:20:39.127 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 4106648 00:20:39.387 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:39.387 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:20:39.387 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:39.387 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.387 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.387 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.387 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.387 08:16:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.969 08:16:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:41.969 00:20:41.969 real 0m7.126s 00:20:41.969 user 0m15.819s 00:20:41.969 sys 0m3.527s 00:20:41.969 08:16:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:41.969 08:16:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:41.969 ************************************ 00:20:41.969 END TEST nvmf_bdev_io_wait 00:20:41.969 ************************************ 00:20:41.969 08:16:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:41.969 08:16:51 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:41.969 08:16:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:41.969 08:16:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.969 08:16:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:41.969 ************************************ 00:20:41.969 START TEST nvmf_queue_depth 00:20:41.969 ************************************ 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:41.969 * Looking for test storage... 00:20:41.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.969 08:16:51 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:20:41.969 08:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:43.357 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:43.357 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:43.357 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:43.357 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.357 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.615 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.615 08:16:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:43.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:43.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:20:43.615 00:20:43.615 --- 10.0.0.2 ping statistics --- 00:20:43.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.615 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:20:43.615 00:20:43.615 --- 10.0.0.1 ping statistics --- 00:20:43.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.615 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4108921 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 4108921 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4108921 ']' 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.615 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:43.615 [2024-07-21 08:16:53.123556] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:20:43.615 [2024-07-21 08:16:53.123661] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.615 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.615 [2024-07-21 08:16:53.186380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.873 [2024-07-21 08:16:53.271427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:43.873 [2024-07-21 08:16:53.271474] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.873 [2024-07-21 08:16:53.271504] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.873 [2024-07-21 08:16:53.271531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.873 [2024-07-21 08:16:53.271540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.873 [2024-07-21 08:16:53.271566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:43.873 [2024-07-21 08:16:53.408491] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:43.873 08:16:53 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:43.873 Malloc0 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:43.873 [2024-07-21 08:16:53.470289] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4108946 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4108946 /var/tmp/bdevperf.sock 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4108946 ']' 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.873 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:44.131 [2024-07-21 08:16:53.516094] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:20:44.131 [2024-07-21 08:16:53.516159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108946 ] 00:20:44.131 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.131 [2024-07-21 08:16:53.576824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.131 [2024-07-21 08:16:53.667342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.389 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.389 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:20:44.389 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:44.389 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.389 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:44.389 NVMe0n1 00:20:44.389 08:16:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.389 08:16:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:44.389 Running I/O for 10 seconds... 
00:20:56.581 00:20:56.581 Latency(us) 00:20:56.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.581 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:20:56.581 Verification LBA range: start 0x0 length 0x4000 00:20:56.581 NVMe0n1 : 10.11 8207.34 32.06 0.00 0.00 123746.88 22816.24 73788.68 00:20:56.581 =================================================================================================================== 00:20:56.581 Total : 8207.34 32.06 0.00 0.00 123746.88 22816.24 73788.68 00:20:56.581 0 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4108946 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4108946 ']' 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4108946 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4108946 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4108946' 00:20:56.581 killing process with pid 4108946 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4108946 00:20:56.581 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.581 00:20:56.581 Latency(us) 00:20:56.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.581 
=================================================================================================================== 00:20:56.581 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4108946 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.581 rmmod nvme_tcp 00:20:56.581 rmmod nvme_fabrics 00:20:56.581 rmmod nvme_keyring 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4108921 ']' 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4108921 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4108921 ']' 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4108921 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.581 08:17:04 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4108921 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4108921' 00:20:56.581 killing process with pid 4108921 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4108921 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4108921 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.581 08:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.516 08:17:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:57.516 00:20:57.516 real 0m15.717s 00:20:57.516 user 0m22.328s 00:20:57.516 sys 0m2.917s 00:20:57.516 08:17:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.516 08:17:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:57.516 ************************************ 00:20:57.516 END TEST nvmf_queue_depth 
00:20:57.516 ************************************ 00:20:57.516 08:17:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:57.516 08:17:06 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:57.516 08:17:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:57.516 08:17:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.516 08:17:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:57.516 ************************************ 00:20:57.516 START TEST nvmf_target_multipath 00:20:57.516 ************************************ 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:57.516 * Looking for test storage... 00:20:57.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.516 08:17:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.517 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:57.517 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:57.517 08:17:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:20:57.517 08:17:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:59.414 
08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.414 08:17:08 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:59.414 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.414 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:59.415 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.415 
08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:59.415 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:59.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:59.415 08:17:08 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:59.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:59.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:20:59.415 00:20:59.415 --- 10.0.0.2 ping statistics --- 00:20:59.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.415 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:20:59.415 00:20:59.415 --- 10.0.0.1 ping statistics --- 00:20:59.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.415 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:20:59.415 only one NIC for nvmf test 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:59.415 08:17:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:59.415 rmmod nvme_tcp 00:20:59.415 rmmod nvme_fabrics 00:20:59.415 rmmod nvme_keyring 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.673 08:17:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:01.574 00:21:01.574 real 0m4.287s 00:21:01.574 user 0m0.814s 00:21:01.574 sys 0m1.442s 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:01.574 08:17:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:01.574 ************************************ 00:21:01.574 END TEST nvmf_target_multipath 00:21:01.574 ************************************ 00:21:01.574 08:17:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:01.574 08:17:11 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:01.574 08:17:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:01.574 08:17:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.574 08:17:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:01.574 ************************************ 00:21:01.574 START TEST nvmf_zcopy 00:21:01.574 ************************************ 00:21:01.574 08:17:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:01.831 * Looking for test storage... 
00:21:01.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.831 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:21:01.832 08:17:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:21:03.728 08:17:13 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:03.728 08:17:13 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:03.728 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:03.728 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:03.728 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:03.729 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:03.729 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.729 08:17:13 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:03.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:21:03.729 00:21:03.729 --- 10.0.0.2 ping statistics --- 00:21:03.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.729 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:03.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:21:03.729 00:21:03.729 --- 10.0.0.1 ping statistics --- 00:21:03.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.729 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4113992 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4113992 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 4113992 ']' 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.729 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.729 [2024-07-21 08:17:13.307733] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:21:03.729 [2024-07-21 08:17:13.307813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.729 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.986 [2024-07-21 08:17:13.373716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.986 [2024-07-21 08:17:13.460677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.986 [2024-07-21 08:17:13.460733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.986 [2024-07-21 08:17:13.460747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.986 [2024-07-21 08:17:13.460758] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.986 [2024-07-21 08:17:13.460767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:03.986 [2024-07-21 08:17:13.460794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.986 [2024-07-21 08:17:13.605723] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.986 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:04.243 [2024-07-21 08:17:13.621939] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:04.243 malloc0 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.243 { 00:21:04.243 "params": { 00:21:04.243 "name": "Nvme$subsystem", 00:21:04.243 "trtype": "$TEST_TRANSPORT", 00:21:04.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.243 "adrfam": "ipv4", 00:21:04.243 "trsvcid": "$NVMF_PORT", 00:21:04.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.243 "hdgst": ${hdgst:-false}, 00:21:04.243 "ddgst": ${ddgst:-false} 00:21:04.243 }, 00:21:04.243 "method": "bdev_nvme_attach_controller" 00:21:04.243 } 00:21:04.243 EOF 00:21:04.243 )") 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:21:04.243 08:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:04.243 "params": { 00:21:04.243 "name": "Nvme1", 00:21:04.243 "trtype": "tcp", 00:21:04.243 "traddr": "10.0.0.2", 00:21:04.243 "adrfam": "ipv4", 00:21:04.243 "trsvcid": "4420", 00:21:04.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.243 "hdgst": false, 00:21:04.243 "ddgst": false 00:21:04.243 }, 00:21:04.243 "method": "bdev_nvme_attach_controller" 00:21:04.243 }' 00:21:04.243 [2024-07-21 08:17:13.705393] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:21:04.243 [2024-07-21 08:17:13.705471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4114130 ] 00:21:04.243 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.243 [2024-07-21 08:17:13.773355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.243 [2024-07-21 08:17:13.867517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.505 Running I/O for 10 seconds... 00:21:14.520 00:21:14.520 Latency(us) 00:21:14.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.520 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:21:14.521 Verification LBA range: start 0x0 length 0x1000 00:21:14.521 Nvme1n1 : 10.02 5363.38 41.90 0.00 0.00 23803.96 2682.12 32234.00 00:21:14.521 =================================================================================================================== 00:21:14.521 Total : 5363.38 41.90 0.00 0.00 23803.96 2682.12 32234.00 00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4115335 00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.778 08:17:24 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.778 { 00:21:14.778 "params": { 00:21:14.778 "name": "Nvme$subsystem", 00:21:14.778 "trtype": "$TEST_TRANSPORT", 00:21:14.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.778 "adrfam": "ipv4", 00:21:14.778 "trsvcid": "$NVMF_PORT", 00:21:14.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.778 "hdgst": ${hdgst:-false}, 00:21:14.778 "ddgst": ${ddgst:-false} 00:21:14.778 }, 00:21:14.778 "method": "bdev_nvme_attach_controller" 00:21:14.778 } 00:21:14.778 EOF 00:21:14.778 )") 00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:21:14.778 [2024-07-21 08:17:24.331299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.778 [2024-07-21 08:17:24.331344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:21:14.778 08:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:14.778 "params": { 00:21:14.778 "name": "Nvme1", 00:21:14.778 "trtype": "tcp", 00:21:14.778 "traddr": "10.0.0.2", 00:21:14.778 "adrfam": "ipv4", 00:21:14.778 "trsvcid": "4420", 00:21:14.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.779 "hdgst": false, 00:21:14.779 "ddgst": false 00:21:14.779 }, 00:21:14.779 "method": "bdev_nvme_attach_controller" 00:21:14.779 }' 00:21:14.779 [2024-07-21 08:17:24.339269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.779 [2024-07-21 08:17:24.339297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.779 [2024-07-21 08:17:24.347285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.779 [2024-07-21 08:17:24.347315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.779 [2024-07-21 08:17:24.355299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.779 [2024-07-21 08:17:24.355321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.779 [2024-07-21 08:17:24.363316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.779 [2024-07-21 08:17:24.363337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.779 [2024-07-21 08:17:24.367986] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:21:14.779 [2024-07-21 08:17:24.368061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4115335 ] 00:21:14.779 [2024-07-21 08:17:24.371337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.779 [2024-07-21 08:17:24.371358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.779 [2024-07-21 08:17:24.379355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.779 [2024-07-21 08:17:24.379374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.779 [2024-07-21 08:17:24.387380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.779 [2024-07-21 08:17:24.387400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.779 [2024-07-21 08:17:24.395397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.779 [2024-07-21 08:17:24.395416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.779 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.779 [2024-07-21 08:17:24.403436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.779 [2024-07-21 08:17:24.403460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.035 [2024-07-21 08:17:24.411460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.035 [2024-07-21 08:17:24.411484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.035 [2024-07-21 08:17:24.419480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.035 [2024-07-21 08:17:24.419504] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.035 [2024-07-21 08:17:24.427504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.035 [2024-07-21 08:17:24.427529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.035 [2024-07-21 08:17:24.431478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.035 [2024-07-21 08:17:24.435532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.035 [2024-07-21 08:17:24.435559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.035 [2024-07-21 08:17:24.443575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.035 [2024-07-21 08:17:24.443621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.035 [2024-07-21 08:17:24.451572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.035 [2024-07-21 08:17:24.451599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.035 [2024-07-21 08:17:24.459592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.035 [2024-07-21 08:17:24.459626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.035 [2024-07-21 08:17:24.467620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.467659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.475642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.475688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.483685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:21:15.036 [2024-07-21 08:17:24.483710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.491738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.491775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.499714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.499736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.507731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.507752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.515745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.515767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.523766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.523789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.528340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.036 [2024-07-21 08:17:24.531787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.531809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.539807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.539829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.547859] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.547907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.555882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.555932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.563922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.563974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.571944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.571985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.579970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.580010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.587992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.588033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.595996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.596022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.604029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.604067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.612046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.612086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.620081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.620119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.628080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.628105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.636102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.636127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.644147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.644177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.652159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.652187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.036 [2024-07-21 08:17:24.660183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.036 [2024-07-21 08:17:24.660210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.668207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.668235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.676229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 
[2024-07-21 08:17:24.676256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.684254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.684281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.692272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.692298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.740874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.740903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.748437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.748466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 Running I/O for 5 seconds... 
00:21:15.293 [2024-07-21 08:17:24.756459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.756485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.771582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.771621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.782910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.782939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.794168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.794197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.805547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.805577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.816636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.816679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.827508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.827537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.840764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.840792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.850969] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.850998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.861903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.861931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.874203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.874232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.883774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.883818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.895199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.895227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.906029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.906058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.293 [2024-07-21 08:17:24.916873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.293 [2024-07-21 08:17:24.916902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:24.927697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:24.927726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:24.938296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:24.938324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:24.949692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:24.949721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:24.961523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:24.961552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:24.973122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:24.973152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:24.984274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:24.984303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:24.997045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:24.997074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.007682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.007711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.018851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.018879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.029862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 
[2024-07-21 08:17:25.029889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.040938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.040966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.053586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.053631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.064129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.064157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.075163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.075191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.087976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.088003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.098637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.098664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.109635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.109663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.122603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.122640] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.133056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.133084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.143954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.143982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.155004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.155032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.165849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.165877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.551 [2024-07-21 08:17:25.178462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.551 [2024-07-21 08:17:25.178489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.808 [2024-07-21 08:17:25.188889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.808 [2024-07-21 08:17:25.188917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.808 [2024-07-21 08:17:25.200339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.808 [2024-07-21 08:17:25.200367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:15.808 [2024-07-21 08:17:25.213441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.808 [2024-07-21 08:17:25.213469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:21:15.808 [2024-07-21 08:17:25.223835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:15.808 [2024-07-21 08:17:25.223864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pair repeated while the test loops on the duplicate-NSID RPC, roughly every 11 ms from 08:17:25.223 through 08:17:27.059 ...]
00:21:17.613 [2024-07-21 08:17:27.059250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.059279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:21:17.613 [2024-07-21 08:17:27.069898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.069925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.081106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.081133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.092128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.092155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.102543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.102572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.113169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.113197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.123622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.123662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.134582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.134611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.144740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.144769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.154906] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.154935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.165455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.165484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.175642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.175678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.186590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.186627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.199157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.199185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.209110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.209138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.219624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.219652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.230263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.230291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.613 [2024-07-21 08:17:27.241112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:21:17.613 [2024-07-21 08:17:27.241143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.251820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.251847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.262676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.262704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.274938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.274966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.284352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.284380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.295433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.295461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.306375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.306403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.316858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.316887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.327482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 
[2024-07-21 08:17:27.327510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.338392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.338419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.351089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.351118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.361665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.361693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.372177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.372206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.871 [2024-07-21 08:17:27.382950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.871 [2024-07-21 08:17:27.382983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.872 [2024-07-21 08:17:27.394161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.872 [2024-07-21 08:17:27.394189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.872 [2024-07-21 08:17:27.406236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.872 [2024-07-21 08:17:27.406265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.872 [2024-07-21 08:17:27.416467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.872 [2024-07-21 08:17:27.416496] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.872 [2024-07-21 08:17:27.427162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.872 [2024-07-21 08:17:27.427190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.872 [2024-07-21 08:17:27.439656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.872 [2024-07-21 08:17:27.439694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.872 [2024-07-21 08:17:27.450206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.872 [2024-07-21 08:17:27.450235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.872 [2024-07-21 08:17:27.460679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.872 [2024-07-21 08:17:27.460707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.872 [2024-07-21 08:17:27.471029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.872 [2024-07-21 08:17:27.471056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.872 [2024-07-21 08:17:27.481045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.872 [2024-07-21 08:17:27.481073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.872 [2024-07-21 08:17:27.491485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.872 [2024-07-21 08:17:27.491513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.501799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.501828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:21:18.130 [2024-07-21 08:17:27.512421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.512448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.524446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.524474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.534277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.534305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.545595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.545630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.555714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.555742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.566142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.566170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.576967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.576995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.587904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.587933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.600432] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.600460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.610753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.610781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.621376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.621405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.633736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.633763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.644028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.644057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.654707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.654735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.665664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.665691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.676162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.676190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.687384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.687413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.698035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.698064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.709102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.709130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.720243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.720271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.733224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.733252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.743730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.743758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.130 [2024-07-21 08:17:27.754785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.130 [2024-07-21 08:17:27.754813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.765673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.765701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.776198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 
[2024-07-21 08:17:27.776225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.787265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.787293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.798523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.798551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.809401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.809430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.820731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.820759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.831727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.831755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.843090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.843117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.853912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.853940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.866723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.866752] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.877117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.877146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.888005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.888033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.898722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.898751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.909844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.909873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.920862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.920890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.931681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.931710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.942938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.942966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.953539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.953568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:21:18.388 [2024-07-21 08:17:27.964753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.964784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.977735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.977764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.988237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.988265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:27.999196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:27.999224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.388 [2024-07-21 08:17:28.010510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.388 [2024-07-21 08:17:28.010538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.645 [2024-07-21 08:17:28.021514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.645 [2024-07-21 08:17:28.021546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.645 [2024-07-21 08:17:28.032772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.645 [2024-07-21 08:17:28.032800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.645 [2024-07-21 08:17:28.043865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.645 [2024-07-21 08:17:28.043893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.645 [2024-07-21 08:17:28.055283] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.645 [2024-07-21 08:17:28.055312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.645 [2024-07-21 08:17:28.066666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.645 [2024-07-21 08:17:28.066694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.645 [2024-07-21 08:17:28.078066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.645 [2024-07-21 08:17:28.078110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.089479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.089510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.100493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.100521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.111938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.111966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.122885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.122913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.134274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.134302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.147198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.147226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.157026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.157055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.167910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.167938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.178580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.178608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.189771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.189799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.201226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.201255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.212018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.212046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.224830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.224859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.235078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 
[2024-07-21 08:17:28.235106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.246692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.246720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.257101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.257130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.646 [2024-07-21 08:17:28.268343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.646 [2024-07-21 08:17:28.268372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.903 [2024-07-21 08:17:28.281361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.903 [2024-07-21 08:17:28.281390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.903 [2024-07-21 08:17:28.291660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.903 [2024-07-21 08:17:28.291689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.903 [2024-07-21 08:17:28.302414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.903 [2024-07-21 08:17:28.302442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.903 [2024-07-21 08:17:28.315413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.903 [2024-07-21 08:17:28.315441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.903 [2024-07-21 08:17:28.325898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.903 [2024-07-21 08:17:28.325927] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.903 [2024-07-21 08:17:28.336782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.336811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.347415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.347443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.358277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.358306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.369023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.369051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.380000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.380028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.391217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.391245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.402428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.402456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.415143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.415172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:21:18.904 [2024-07-21 08:17:28.425744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.425779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.437034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.437061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.447935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.447964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.459256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.459284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.472466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.472495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.482908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.482937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.493832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.493861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.504557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.504589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.515264] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.515292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.904 [2024-07-21 08:17:28.526060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.904 [2024-07-21 08:17:28.526088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.536938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.536966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.547888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.547931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.560710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.560739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.571407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.571436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.582755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.582782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.593929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.593956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.605212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.605241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.616382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.616411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.627666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.627694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.640515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.640551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.652382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.652410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.661737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.661765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.673445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.673472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.684442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.684470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.695182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 
[2024-07-21 08:17:28.695211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.706144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.706173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.717009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.717037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.729241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.729270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.739065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.739093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.749607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.749645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.759888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.759916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.770339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.770366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.161 [2024-07-21 08:17:28.780970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.161 [2024-07-21 08:17:28.780998] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.791470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.791498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.802107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.802135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.812755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.812783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.823297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.823326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.833860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.833888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.844859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.844892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.857419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.857446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.867930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.867958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:21:19.419 [2024-07-21 08:17:28.878821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.878850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.890970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.890998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.901165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.901193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.911886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.911914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.924631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.924659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.934457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.934485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.945326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.945354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.956139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.956167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.967020] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.967048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.979570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.979598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:28.991268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:28.991296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:29.000708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:29.000736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:29.011384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:29.011412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:29.021859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:29.021887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:29.032371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:29.032398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.419 [2024-07-21 08:17:29.042988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.419 [2024-07-21 08:17:29.043016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.052787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.052822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.063311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.063339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.073687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.073714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.084446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.084474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.094498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.094525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.104623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.104651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.114960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.114989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.125293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.125320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.135812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 
[2024-07-21 08:17:29.135840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.146358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.146386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.156678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.156707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.166806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.166834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.177713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.177741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.189373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.189401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.199092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.199120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.210349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.210377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.222965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.222993] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.233150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.233178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.248031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.248061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.258173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.258218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.268443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.268471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.278819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.278847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.289071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.289100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.677 [2024-07-21 08:17:29.299381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.677 [2024-07-21 08:17:29.299408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.310033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.310062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:21:19.935 [2024-07-21 08:17:29.320666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.320694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.331595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.331630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.341884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.341911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.353435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.353463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.364521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.364549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.377201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.377229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.387101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.387130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.398736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.398764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.410023] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.410067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.421356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.421385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.432445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.432473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.445852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.445894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.456340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.456369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.467233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.467262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.480169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.480198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.935 [2024-07-21 08:17:29.490139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.935 [2024-07-21 08:17:29.490167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.936 [2024-07-21 08:17:29.502018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:21:19.936 [2024-07-21 08:17:29.502046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.936 [2024-07-21 08:17:29.513313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.936 [2024-07-21 08:17:29.513341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.936 [2024-07-21 08:17:29.524863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.936 [2024-07-21 08:17:29.524891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.936 [2024-07-21 08:17:29.536272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.936 [2024-07-21 08:17:29.536301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.936 [2024-07-21 08:17:29.547893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.936 [2024-07-21 08:17:29.547921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.936 [2024-07-21 08:17:29.559068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.936 [2024-07-21 08:17:29.559096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.571610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.571645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.581444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.581473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.593195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 
[2024-07-21 08:17:29.593223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.605773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.605800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.615514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.615542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.626851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.626879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.637684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.637712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.649000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.649028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.661738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.661767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.671999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.672027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.682784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.682812] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.695778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.695807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.706633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.706661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.717458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.717487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.730279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.730308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.740662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.740691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.751174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.751219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.761974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.762002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.773008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.773036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:21:20.194
00:21:20.194 Latency(us)
00:21:20.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:20.194 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:21:20.194 Nvme1n1 : 5.01 11638.64 90.93 0.00 0.00 10982.80 4878.79 24563.86
00:21:20.194 ===================================================================================================================
00:21:20.194 Total : 11638.64 90.93 0.00 0.00 10982.80 4878.79 24563.86
00:21:20.194 [2024-07-21 08:17:29.778532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.778561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.786546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.786572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.794580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.794619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.802640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.802686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.810679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.810728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.194 [2024-07-21 08:17:29.818680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.194 [2024-07-21 08:17:29.818726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453
[2024-07-21 08:17:29.826707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.826751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.834725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.834773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.842741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.842790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.850757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.850803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.858796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.858845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.866818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.866865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.874840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.874890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.882854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.882902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.890875] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.890922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.898895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.898941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.906925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.906973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.914936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.914983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.922939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.922966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.930956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.930988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.939013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.939062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.947025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.947071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.955039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.955078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.963043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.963068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.971095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.971146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.979123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.979186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.987117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.987151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:29.995123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:29.995149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:30.003154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:30.003186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 [2024-07-21 08:17:30.011171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:20.453 [2024-07-21 08:17:30.011205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:20.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4115335) - No such process 00:21:20.453 
08:17:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4115335 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:20.453 delay0 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.453 08:17:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:21:20.453 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.711 [2024-07-21 08:17:30.124610] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:21:27.260 Initializing NVMe Controllers 00:21:27.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.260 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:27.260 Initialization complete. Launching workers. 00:21:27.260 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 140 00:21:27.260 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 427, failed to submit 33 00:21:27.260 success 271, unsuccess 156, failed 0 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.260 rmmod nvme_tcp 00:21:27.260 rmmod nvme_fabrics 00:21:27.260 rmmod nvme_keyring 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4113992 ']' 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4113992 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 4113992 ']' 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 4113992 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.260 
08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4113992 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4113992' 00:21:27.260 killing process with pid 4113992 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 4113992 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 4113992 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.260 08:17:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.194 08:17:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:29.194 00:21:29.194 real 0m27.547s 00:21:29.194 user 0m40.693s 00:21:29.194 sys 0m8.278s 00:21:29.194 08:17:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:29.194 08:17:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:29.194 ************************************ 00:21:29.194 END TEST nvmf_zcopy 00:21:29.194 ************************************ 00:21:29.194 08:17:38 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:21:29.194 08:17:38 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:29.194 08:17:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:29.194 08:17:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:29.194 08:17:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:29.194 ************************************ 00:21:29.194 START TEST nvmf_nmic 00:21:29.194 ************************************ 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:29.194 * Looking for test storage... 00:21:29.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.194 08:17:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:29.452 
08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:21:29.452 08:17:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.348 08:17:40 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:31.349 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:31.349 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:31.349 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.349 08:17:40 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:31.349 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.349 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:31.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:21:31.607 00:21:31.607 --- 10.0.0.2 ping statistics --- 00:21:31.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.607 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:31.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:21:31.607 00:21:31.607 --- 10.0.0.1 ping statistics --- 00:21:31.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.607 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:31.607 08:17:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4118716 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4118716 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 4118716 ']' 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.607 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.608 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.608 [2024-07-21 08:17:41.066293] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:21:31.608 [2024-07-21 08:17:41.066390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.608 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.608 [2024-07-21 08:17:41.134819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.608 [2024-07-21 08:17:41.229501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.608 [2024-07-21 08:17:41.229562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.608 [2024-07-21 08:17:41.229592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.608 [2024-07-21 08:17:41.229605] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.608 [2024-07-21 08:17:41.229629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:31.608 [2024-07-21 08:17:41.229701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.608 [2024-07-21 08:17:41.229757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.608 [2024-07-21 08:17:41.229812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.608 [2024-07-21 08:17:41.229814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.866 [2024-07-21 08:17:41.400723] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.866 Malloc0 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.866 [2024-07-21 08:17:41.454127] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:21:31.866 test case1: single bdev can't be used in multiple subsystems 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.866 [2024-07-21 08:17:41.477971] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:21:31.866 [2024-07-21 08:17:41.478000] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:21:31.866 [2024-07-21 08:17:41.478015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.866 request: 00:21:31.866 { 00:21:31.866 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:21:31.866 "namespace": { 00:21:31.866 "bdev_name": "Malloc0", 00:21:31.866 "no_auto_visible": false 00:21:31.866 }, 00:21:31.866 "method": "nvmf_subsystem_add_ns", 00:21:31.866 "req_id": 1 00:21:31.866 } 00:21:31.866 Got JSON-RPC error response 00:21:31.866 response: 00:21:31.866 { 00:21:31.866 "code": -32602, 00:21:31.866 "message": "Invalid parameters" 00:21:31.866 } 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:21:31.866 Adding namespace failed - expected result. 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:21:31.866 test case2: host connect to nvmf target in multiple paths 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:31.866 [2024-07-21 08:17:41.486101] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.866 08:17:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:32.797 08:17:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:21:33.362 08:17:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:21:33.362 08:17:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:21:33.362 08:17:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:33.362 08:17:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:33.362 08:17:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:21:35.256 08:17:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:35.256 08:17:44 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:35.256 08:17:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:35.256 08:17:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:35.256 08:17:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:35.256 08:17:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:21:35.256 08:17:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:35.513 [global] 00:21:35.513 thread=1 00:21:35.513 invalidate=1 00:21:35.513 rw=write 00:21:35.513 time_based=1 00:21:35.513 runtime=1 00:21:35.513 ioengine=libaio 00:21:35.513 direct=1 00:21:35.513 bs=4096 00:21:35.513 iodepth=1 00:21:35.513 norandommap=0 00:21:35.513 numjobs=1 00:21:35.513 00:21:35.513 verify_dump=1 00:21:35.513 verify_backlog=512 00:21:35.513 verify_state_save=0 00:21:35.513 do_verify=1 00:21:35.513 verify=crc32c-intel 00:21:35.513 [job0] 00:21:35.513 filename=/dev/nvme0n1 00:21:35.513 Could not set queue depth (nvme0n1) 00:21:35.513 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:35.513 fio-3.35 00:21:35.513 Starting 1 thread 00:21:36.882 00:21:36.882 job0: (groupid=0, jobs=1): err= 0: pid=4119231: Sun Jul 21 08:17:46 2024 00:21:36.882 read: IOPS=1999, BW=7996KiB/s (8188kB/s)(8004KiB/1001msec) 00:21:36.882 slat (nsec): min=4720, max=30083, avg=8712.08, stdev=3841.23 00:21:36.882 clat (usec): min=222, max=340, avg=275.49, stdev=25.94 00:21:36.882 lat (usec): min=230, max=356, avg=284.20, stdev=26.88 00:21:36.882 clat percentiles (usec): 00:21:36.882 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 253], 00:21:36.882 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:21:36.882 | 70.00th=[ 293], 
80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 318], 00:21:36.882 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 338], 00:21:36.882 | 99.99th=[ 343] 00:21:36.882 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:36.882 slat (usec): min=6, max=28064, avg=21.39, stdev=619.96 00:21:36.882 clat (usec): min=140, max=449, avg=184.56, stdev=27.49 00:21:36.882 lat (usec): min=147, max=28314, avg=205.95, stdev=622.06 00:21:36.882 clat percentiles (usec): 00:21:36.882 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:21:36.882 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:21:36.882 | 70.00th=[ 188], 80.00th=[ 206], 90.00th=[ 223], 95.00th=[ 231], 00:21:36.882 | 99.00th=[ 269], 99.50th=[ 318], 99.90th=[ 429], 99.95th=[ 433], 00:21:36.882 | 99.99th=[ 449] 00:21:36.882 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:21:36.882 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:36.882 lat (usec) : 250=58.21%, 500=41.79% 00:21:36.882 cpu : usr=2.20%, sys=3.00%, ctx=4051, majf=0, minf=2 00:21:36.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.882 issued rwts: total=2001,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:36.882 00:21:36.882 Run status group 0 (all jobs): 00:21:36.882 READ: bw=7996KiB/s (8188kB/s), 7996KiB/s-7996KiB/s (8188kB/s-8188kB/s), io=8004KiB (8196kB), run=1001-1001msec 00:21:36.882 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:21:36.882 00:21:36.882 Disk stats (read/write): 00:21:36.882 nvme0n1: ios=1689/2048, merge=0/0, ticks=1434/374, in_queue=1808, util=98.50% 00:21:36.882 08:17:46 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:36.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.882 rmmod nvme_tcp 00:21:36.882 rmmod nvme_fabrics 00:21:36.882 rmmod nvme_keyring 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4118716 ']' 00:21:36.882 08:17:46 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4118716 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 4118716 ']' 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 4118716 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4118716 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4118716' 00:21:36.882 killing process with pid 4118716 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 4118716 00:21:36.882 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 4118716 00:21:37.139 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:37.139 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:37.139 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:37.139 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.139 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:37.139 08:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.139 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.139 08:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.670 08:17:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:39.670 
00:21:39.670 real 0m9.983s 00:21:39.670 user 0m22.687s 00:21:39.670 sys 0m2.349s 00:21:39.670 08:17:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:39.670 08:17:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:39.670 ************************************ 00:21:39.670 END TEST nvmf_nmic 00:21:39.670 ************************************ 00:21:39.670 08:17:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:39.670 08:17:48 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:39.670 08:17:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:39.670 08:17:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.670 08:17:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:39.670 ************************************ 00:21:39.670 START TEST nvmf_fio_target 00:21:39.670 ************************************ 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:39.670 * Looking for test storage... 
00:21:39.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:39.670 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:39.671 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:39.671 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.671 08:17:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.671 08:17:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.671 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:39.671 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:39.671 08:17:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.671 08:17:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:41.583 
08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.583 
08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:41.583 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:41.583 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.583 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:41.584 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:41.584 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.584 08:17:50 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:41.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:21:41.584 00:21:41.584 --- 10.0.0.2 ping statistics --- 00:21:41.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.584 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:41.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:21:41.584 00:21:41.584 --- 10.0.0.1 ping statistics --- 00:21:41.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.584 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4121299 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4121299 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 
-- # '[' -z 4121299 ']' 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.584 08:17:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.584 [2024-07-21 08:17:51.024498] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:21:41.584 [2024-07-21 08:17:51.024583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.584 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.584 [2024-07-21 08:17:51.104621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.584 [2024-07-21 08:17:51.202675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.584 [2024-07-21 08:17:51.202739] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.584 [2024-07-21 08:17:51.202756] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.584 [2024-07-21 08:17:51.202770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.584 [2024-07-21 08:17:51.202782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:41.584 [2024-07-21 08:17:51.202841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.584 [2024-07-21 08:17:51.202887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.584 [2024-07-21 08:17:51.202943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.584 [2024-07-21 08:17:51.202946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.840 08:17:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.840 08:17:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:21:41.840 08:17:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.840 08:17:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.840 08:17:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.840 08:17:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.841 08:17:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:42.096 [2024-07-21 08:17:51.609417] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.096 08:17:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:42.352 08:17:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:21:42.352 08:17:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:42.608 08:17:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:21:42.608 08:17:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:21:42.865 08:17:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:21:42.865 08:17:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:43.121 08:17:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:21:43.121 08:17:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:21:43.378 08:17:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:43.635 08:17:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:21:43.635 08:17:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:43.892 08:17:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:21:43.892 08:17:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:44.149 08:17:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:21:44.149 08:17:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:21:44.408 08:17:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:44.698 08:17:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:44.698 08:17:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:44.954 08:17:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:44.954 08:17:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:45.211 08:17:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.467 [2024-07-21 08:17:54.935419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.467 08:17:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:21:45.723 08:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:21:45.980 08:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:46.543 08:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:21:46.543 08:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:21:46.543 08:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:46.543 08:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:21:46.543 08:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:21:46.543 08:17:56 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:21:48.449 08:17:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:48.449 08:17:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:48.449 08:17:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:48.449 08:17:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:21:48.449 08:17:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:48.449 08:17:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:21:48.449 08:17:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:48.449 [global] 00:21:48.449 thread=1 00:21:48.449 invalidate=1 00:21:48.449 rw=write 00:21:48.449 time_based=1 00:21:48.449 runtime=1 00:21:48.449 ioengine=libaio 00:21:48.449 direct=1 00:21:48.449 bs=4096 00:21:48.449 iodepth=1 00:21:48.449 norandommap=0 00:21:48.449 numjobs=1 00:21:48.449 00:21:48.449 verify_dump=1 00:21:48.449 verify_backlog=512 00:21:48.449 verify_state_save=0 00:21:48.449 do_verify=1 00:21:48.449 verify=crc32c-intel 00:21:48.449 [job0] 00:21:48.449 filename=/dev/nvme0n1 00:21:48.449 [job1] 00:21:48.449 filename=/dev/nvme0n2 00:21:48.449 [job2] 00:21:48.449 filename=/dev/nvme0n3 00:21:48.449 [job3] 00:21:48.449 filename=/dev/nvme0n4 00:21:48.706 Could not set queue depth (nvme0n1) 00:21:48.706 Could not set queue depth (nvme0n2) 00:21:48.706 Could not set queue depth (nvme0n3) 00:21:48.706 Could not set queue depth (nvme0n4) 00:21:48.706 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.706 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:21:48.706 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.706 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.706 fio-3.35 00:21:48.706 Starting 4 threads 00:21:50.081 00:21:50.081 job0: (groupid=0, jobs=1): err= 0: pid=4122368: Sun Jul 21 08:17:59 2024 00:21:50.081 read: IOPS=664, BW=2657KiB/s (2721kB/s)(2660KiB/1001msec) 00:21:50.081 slat (nsec): min=7246, max=54433, avg=19889.05, stdev=8112.91 00:21:50.081 clat (usec): min=226, max=41353, avg=1139.97, stdev=5775.12 00:21:50.081 lat (usec): min=236, max=41360, avg=1159.86, stdev=5775.06 00:21:50.081 clat percentiles (usec): 00:21:50.081 | 1.00th=[ 237], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249], 00:21:50.081 | 30.00th=[ 255], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:21:50.081 | 70.00th=[ 302], 80.00th=[ 334], 90.00th=[ 416], 95.00th=[ 445], 00:21:50.081 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:50.081 | 99.99th=[41157] 00:21:50.081 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:21:50.081 slat (nsec): min=7552, max=46290, avg=16369.85, stdev=4901.09 00:21:50.081 clat (usec): min=145, max=388, avg=199.33, stdev=40.45 00:21:50.081 lat (usec): min=155, max=425, avg=215.70, stdev=41.40 00:21:50.081 clat percentiles (usec): 00:21:50.081 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:21:50.081 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 204], 00:21:50.081 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 253], 95.00th=[ 269], 00:21:50.081 | 99.00th=[ 322], 99.50th=[ 359], 99.90th=[ 375], 99.95th=[ 388], 00:21:50.081 | 99.99th=[ 388] 00:21:50.081 bw ( KiB/s): min= 8192, max= 8192, per=38.95%, avg=8192.00, stdev= 0.00, samples=1 00:21:50.081 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:50.081 lat (usec) : 250=62.76%, 500=36.18%, 750=0.24% 
00:21:50.081 lat (msec) : 50=0.83% 00:21:50.081 cpu : usr=1.60%, sys=3.20%, ctx=1689, majf=0, minf=1 00:21:50.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.081 issued rwts: total=665,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:50.081 job1: (groupid=0, jobs=1): err= 0: pid=4122369: Sun Jul 21 08:17:59 2024 00:21:50.081 read: IOPS=1776, BW=7105KiB/s (7275kB/s)(7112KiB/1001msec) 00:21:50.081 slat (nsec): min=7276, max=51907, avg=14975.20, stdev=5468.29 00:21:50.081 clat (usec): min=216, max=429, avg=274.46, stdev=32.69 00:21:50.081 lat (usec): min=224, max=451, avg=289.43, stdev=35.85 00:21:50.081 clat percentiles (usec): 00:21:50.081 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 249], 00:21:50.081 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:21:50.081 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 343], 00:21:50.081 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 408], 99.95th=[ 429], 00:21:50.081 | 99.99th=[ 429] 00:21:50.081 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:50.081 slat (nsec): min=9285, max=55555, avg=20998.79, stdev=6653.09 00:21:50.081 clat (usec): min=147, max=470, avg=206.67, stdev=30.94 00:21:50.081 lat (usec): min=158, max=494, avg=227.66, stdev=33.81 00:21:50.081 clat percentiles (usec): 00:21:50.081 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 186], 00:21:50.081 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:21:50.081 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 247], 95.00th=[ 269], 00:21:50.081 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 347], 99.95th=[ 371], 00:21:50.081 | 99.99th=[ 469] 00:21:50.081 bw ( KiB/s): min= 8192, max= 8192, per=38.95%, 
avg=8192.00, stdev= 0.00, samples=1 00:21:50.081 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:50.081 lat (usec) : 250=58.91%, 500=41.09% 00:21:50.081 cpu : usr=4.90%, sys=9.40%, ctx=3827, majf=0, minf=1 00:21:50.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.081 issued rwts: total=1778,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:50.081 job2: (groupid=0, jobs=1): err= 0: pid=4122370: Sun Jul 21 08:17:59 2024 00:21:50.081 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:21:50.081 slat (nsec): min=6833, max=53120, avg=16439.78, stdev=7271.31 00:21:50.081 clat (usec): min=226, max=41311, avg=1631.13, stdev=7290.99 00:21:50.081 lat (usec): min=237, max=41329, avg=1647.57, stdev=7292.06 00:21:50.081 clat percentiles (usec): 00:21:50.081 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 253], 00:21:50.081 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 285], 00:21:50.081 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 338], 95.00th=[ 433], 00:21:50.081 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:50.081 | 99.99th=[41157] 00:21:50.081 write: IOPS=673, BW=2693KiB/s (2758kB/s)(2696KiB/1001msec); 0 zone resets 00:21:50.081 slat (nsec): min=9414, max=49376, avg=18459.80, stdev=5103.87 00:21:50.081 clat (usec): min=168, max=2034, avg=206.07, stdev=91.00 00:21:50.081 lat (usec): min=179, max=2045, avg=224.53, stdev=91.21 00:21:50.081 clat percentiles (usec): 00:21:50.081 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:21:50.081 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:21:50.081 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 247], 00:21:50.081 | 99.00th=[ 
383], 99.50th=[ 840], 99.90th=[ 2040], 99.95th=[ 2040], 00:21:50.081 | 99.99th=[ 2040] 00:21:50.081 bw ( KiB/s): min= 4096, max= 4096, per=19.48%, avg=4096.00, stdev= 0.00, samples=1 00:21:50.081 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:50.081 lat (usec) : 250=60.62%, 500=37.10%, 750=0.51%, 1000=0.25% 00:21:50.081 lat (msec) : 4=0.08%, 50=1.43% 00:21:50.081 cpu : usr=1.10%, sys=2.10%, ctx=1186, majf=0, minf=1 00:21:50.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.081 issued rwts: total=512,674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:50.081 job3: (groupid=0, jobs=1): err= 0: pid=4122371: Sun Jul 21 08:17:59 2024 00:21:50.081 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:21:50.081 slat (nsec): min=6194, max=56054, avg=17446.15, stdev=8896.43 00:21:50.081 clat (usec): min=222, max=41012, avg=613.77, stdev=3399.55 00:21:50.081 lat (usec): min=229, max=41032, avg=631.22, stdev=3400.55 00:21:50.081 clat percentiles (usec): 00:21:50.081 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:21:50.081 | 30.00th=[ 255], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 306], 00:21:50.081 | 70.00th=[ 359], 80.00th=[ 408], 90.00th=[ 461], 95.00th=[ 478], 00:21:50.081 | 99.00th=[ 519], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:50.081 | 99.99th=[41157] 00:21:50.081 write: IOPS=1515, BW=6062KiB/s (6207kB/s)(6068KiB/1001msec); 0 zone resets 00:21:50.081 slat (nsec): min=7737, max=48557, avg=16602.16, stdev=4934.44 00:21:50.081 clat (usec): min=154, max=375, avg=208.77, stdev=29.99 00:21:50.081 lat (usec): min=162, max=418, avg=225.38, stdev=31.69 00:21:50.081 clat percentiles (usec): 00:21:50.081 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 
174], 20.00th=[ 180], 00:21:50.081 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 208], 60.00th=[ 221], 00:21:50.081 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 255], 00:21:50.081 | 99.00th=[ 285], 99.50th=[ 314], 99.90th=[ 371], 99.95th=[ 375], 00:21:50.081 | 99.99th=[ 375] 00:21:50.081 bw ( KiB/s): min= 4096, max= 4096, per=19.48%, avg=4096.00, stdev= 0.00, samples=1 00:21:50.082 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:50.082 lat (usec) : 250=65.41%, 500=33.84%, 750=0.43% 00:21:50.082 lat (msec) : 20=0.04%, 50=0.28% 00:21:50.082 cpu : usr=2.70%, sys=4.10%, ctx=2541, majf=0, minf=2 00:21:50.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:50.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.082 issued rwts: total=1024,1517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:50.082 00:21:50.082 Run status group 0 (all jobs): 00:21:50.082 READ: bw=15.5MiB/s (16.3MB/s), 2046KiB/s-7105KiB/s (2095kB/s-7275kB/s), io=15.5MiB (16.3MB), run=1001-1001msec 00:21:50.082 WRITE: bw=20.5MiB/s (21.5MB/s), 2693KiB/s-8184KiB/s (2758kB/s-8380kB/s), io=20.6MiB (21.6MB), run=1001-1001msec 00:21:50.082 00:21:50.082 Disk stats (read/write): 00:21:50.082 nvme0n1: ios=615/1024, merge=0/0, ticks=619/192, in_queue=811, util=86.87% 00:21:50.082 nvme0n2: ios=1585/1703, merge=0/0, ticks=599/337, in_queue=936, util=89.10% 00:21:50.082 nvme0n3: ios=537/512, merge=0/0, ticks=772/102, in_queue=874, util=94.75% 00:21:50.082 nvme0n4: ios=941/1024, merge=0/0, ticks=660/198, in_queue=858, util=95.76% 00:21:50.082 08:17:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:21:50.082 [global] 00:21:50.082 thread=1 00:21:50.082 
invalidate=1 00:21:50.082 rw=randwrite 00:21:50.082 time_based=1 00:21:50.082 runtime=1 00:21:50.082 ioengine=libaio 00:21:50.082 direct=1 00:21:50.082 bs=4096 00:21:50.082 iodepth=1 00:21:50.082 norandommap=0 00:21:50.082 numjobs=1 00:21:50.082 00:21:50.082 verify_dump=1 00:21:50.082 verify_backlog=512 00:21:50.082 verify_state_save=0 00:21:50.082 do_verify=1 00:21:50.082 verify=crc32c-intel 00:21:50.082 [job0] 00:21:50.082 filename=/dev/nvme0n1 00:21:50.082 [job1] 00:21:50.082 filename=/dev/nvme0n2 00:21:50.082 [job2] 00:21:50.082 filename=/dev/nvme0n3 00:21:50.082 [job3] 00:21:50.082 filename=/dev/nvme0n4 00:21:50.082 Could not set queue depth (nvme0n1) 00:21:50.082 Could not set queue depth (nvme0n2) 00:21:50.082 Could not set queue depth (nvme0n3) 00:21:50.082 Could not set queue depth (nvme0n4) 00:21:50.082 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:50.082 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:50.082 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:50.082 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:50.082 fio-3.35 00:21:50.082 Starting 4 threads 00:21:51.454 00:21:51.454 job0: (groupid=0, jobs=1): err= 0: pid=4122597: Sun Jul 21 08:18:00 2024 00:21:51.454 read: IOPS=507, BW=2031KiB/s (2080kB/s)(2104KiB/1036msec) 00:21:51.454 slat (nsec): min=5631, max=42502, avg=10775.18, stdev=5946.93 00:21:51.454 clat (usec): min=221, max=45011, avg=1509.18, stdev=6838.58 00:21:51.454 lat (usec): min=228, max=45031, avg=1519.95, stdev=6839.77 00:21:51.454 clat percentiles (usec): 00:21:51.454 | 1.00th=[ 227], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 269], 00:21:51.454 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 314], 60.00th=[ 334], 00:21:51.454 | 70.00th=[ 371], 80.00th=[ 420], 90.00th=[ 486], 
95.00th=[ 545], 00:21:51.454 | 99.00th=[41157], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:21:51.454 | 99.99th=[44827] 00:21:51.454 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets 00:21:51.454 slat (nsec): min=7424, max=62855, avg=13654.98, stdev=7055.59 00:21:51.454 clat (usec): min=146, max=455, avg=211.63, stdev=36.72 00:21:51.454 lat (usec): min=156, max=463, avg=225.29, stdev=37.58 00:21:51.454 clat percentiles (usec): 00:21:51.454 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 182], 00:21:51.454 | 30.00th=[ 188], 40.00th=[ 198], 50.00th=[ 208], 60.00th=[ 219], 00:21:51.454 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 273], 00:21:51.454 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 437], 99.95th=[ 457], 00:21:51.454 | 99.99th=[ 457] 00:21:51.454 bw ( KiB/s): min= 8192, max= 8192, per=42.47%, avg=8192.00, stdev= 0.00, samples=1 00:21:51.454 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:51.454 lat (usec) : 250=61.94%, 500=35.16%, 750=1.87% 00:21:51.454 lat (msec) : 4=0.06%, 50=0.97% 00:21:51.454 cpu : usr=1.55%, sys=2.22%, ctx=1551, majf=0, minf=2 00:21:51.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.454 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:51.454 job1: (groupid=0, jobs=1): err= 0: pid=4122598: Sun Jul 21 08:18:00 2024 00:21:51.454 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:21:51.454 slat (nsec): min=4680, max=56727, avg=9400.55, stdev=5209.84 00:21:51.454 clat (usec): min=191, max=42006, avg=388.71, stdev=2364.71 00:21:51.454 lat (usec): min=205, max=42012, avg=398.11, stdev=2364.74 00:21:51.454 clat percentiles (usec): 00:21:51.454 | 1.00th=[ 
204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 223], 00:21:51.454 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:21:51.454 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 314], 00:21:51.454 | 99.00th=[ 400], 99.50th=[ 453], 99.90th=[42206], 99.95th=[42206], 00:21:51.454 | 99.99th=[42206] 00:21:51.454 write: IOPS=1931, BW=7724KiB/s (7910kB/s)(7732KiB/1001msec); 0 zone resets 00:21:51.454 slat (nsec): min=5619, max=56977, avg=11388.66, stdev=6371.60 00:21:51.454 clat (usec): min=134, max=451, avg=183.92, stdev=46.92 00:21:51.454 lat (usec): min=141, max=463, avg=195.31, stdev=50.28 00:21:51.454 clat percentiles (usec): 00:21:51.454 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:21:51.454 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:21:51.454 | 70.00th=[ 180], 80.00th=[ 223], 90.00th=[ 253], 95.00th=[ 293], 00:21:51.454 | 99.00th=[ 334], 99.50th=[ 375], 99.90th=[ 424], 99.95th=[ 453], 00:21:51.454 | 99.99th=[ 453] 00:21:51.454 bw ( KiB/s): min= 7912, max= 7912, per=41.02%, avg=7912.00, stdev= 0.00, samples=1 00:21:51.454 iops : min= 1978, max= 1978, avg=1978.00, stdev= 0.00, samples=1 00:21:51.454 lat (usec) : 250=79.62%, 500=20.21% 00:21:51.454 lat (msec) : 20=0.03%, 50=0.14% 00:21:51.454 cpu : usr=2.50%, sys=3.80%, ctx=3469, majf=0, minf=1 00:21:51.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.454 issued rwts: total=1536,1933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:51.455 job2: (groupid=0, jobs=1): err= 0: pid=4122599: Sun Jul 21 08:18:00 2024 00:21:51.455 read: IOPS=413, BW=1653KiB/s (1693kB/s)(1716KiB/1038msec) 00:21:51.455 slat (nsec): min=4819, max=48356, avg=12853.00, stdev=7597.87 00:21:51.455 clat 
(usec): min=203, max=42059, avg=2109.61, stdev=8234.36 00:21:51.455 lat (usec): min=208, max=42075, avg=2122.46, stdev=8234.85 00:21:51.455 clat percentiles (usec): 00:21:51.455 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 243], 20.00th=[ 269], 00:21:51.455 | 30.00th=[ 297], 40.00th=[ 326], 50.00th=[ 359], 60.00th=[ 379], 00:21:51.455 | 70.00th=[ 412], 80.00th=[ 465], 90.00th=[ 519], 95.00th=[ 635], 00:21:51.455 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:51.455 | 99.99th=[42206] 00:21:51.455 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:21:51.455 slat (nsec): min=8632, max=40271, avg=10022.45, stdev=2192.46 00:21:51.455 clat (usec): min=172, max=774, avg=231.82, stdev=39.66 00:21:51.455 lat (usec): min=181, max=785, avg=241.85, stdev=39.85 00:21:51.455 clat percentiles (usec): 00:21:51.455 | 1.00th=[ 182], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 217], 00:21:51.455 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:21:51.455 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 255], 00:21:51.455 | 99.00th=[ 289], 99.50th=[ 553], 99.90th=[ 775], 99.95th=[ 775], 00:21:51.455 | 99.99th=[ 775] 00:21:51.455 bw ( KiB/s): min= 4096, max= 4096, per=21.24%, avg=4096.00, stdev= 0.00, samples=1 00:21:51.455 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:51.455 lat (usec) : 250=55.47%, 500=38.47%, 750=3.93%, 1000=0.11% 00:21:51.455 lat (msec) : 20=0.11%, 50=1.91% 00:21:51.455 cpu : usr=0.96%, sys=0.87%, ctx=943, majf=0, minf=1 00:21:51.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.455 issued rwts: total=429,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:51.455 job3: (groupid=0, jobs=1): err= 
0: pid=4122600: Sun Jul 21 08:18:00 2024 00:21:51.455 read: IOPS=1068, BW=4276KiB/s (4378kB/s)(4280KiB/1001msec) 00:21:51.455 slat (nsec): min=4750, max=54533, avg=13537.57, stdev=8943.68 00:21:51.455 clat (usec): min=212, max=41364, avg=610.06, stdev=3504.46 00:21:51.455 lat (usec): min=222, max=41375, avg=623.60, stdev=3504.47 00:21:51.455 clat percentiles (usec): 00:21:51.455 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 247], 00:21:51.455 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 289], 00:21:51.455 | 70.00th=[ 322], 80.00th=[ 363], 90.00th=[ 457], 95.00th=[ 482], 00:21:51.455 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:51.455 | 99.99th=[41157] 00:21:51.455 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:21:51.455 slat (nsec): min=5981, max=69247, avg=14000.17, stdev=7339.81 00:21:51.455 clat (usec): min=139, max=1087, avg=196.43, stdev=56.82 00:21:51.455 lat (usec): min=145, max=1094, avg=210.43, stdev=59.52 00:21:51.455 clat percentiles (usec): 00:21:51.455 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 161], 00:21:51.455 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:21:51.455 | 70.00th=[ 196], 80.00th=[ 219], 90.00th=[ 297], 95.00th=[ 322], 00:21:51.455 | 99.00th=[ 351], 99.50th=[ 375], 99.90th=[ 408], 99.95th=[ 1090], 00:21:51.455 | 99.99th=[ 1090] 00:21:51.455 bw ( KiB/s): min= 4096, max= 4096, per=21.24%, avg=4096.00, stdev= 0.00, samples=1 00:21:51.455 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:51.455 lat (usec) : 250=60.86%, 500=38.07%, 750=0.73% 00:21:51.455 lat (msec) : 2=0.04%, 50=0.31% 00:21:51.455 cpu : usr=1.60%, sys=4.10%, ctx=2607, majf=0, minf=1 00:21:51.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:21:51.455 issued rwts: total=1070,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:51.455 00:21:51.455 Run status group 0 (all jobs): 00:21:51.455 READ: bw=13.4MiB/s (14.1MB/s), 1653KiB/s-6138KiB/s (1693kB/s-6285kB/s), io=13.9MiB (14.6MB), run=1001-1038msec 00:21:51.455 WRITE: bw=18.8MiB/s (19.7MB/s), 1973KiB/s-7724KiB/s (2020kB/s-7910kB/s), io=19.6MiB (20.5MB), run=1001-1038msec 00:21:51.455 00:21:51.455 Disk stats (read/write): 00:21:51.455 nvme0n1: ios=562/1024, merge=0/0, ticks=786/206, in_queue=992, util=98.70% 00:21:51.455 nvme0n2: ios=1211/1536, merge=0/0, ticks=510/287, in_queue=797, util=86.57% 00:21:51.455 nvme0n3: ios=450/512, merge=0/0, ticks=1683/119, in_queue=1802, util=97.81% 00:21:51.455 nvme0n4: ios=1072/1038, merge=0/0, ticks=1234/182, in_queue=1416, util=98.63% 00:21:51.455 08:18:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:21:51.455 [global] 00:21:51.455 thread=1 00:21:51.455 invalidate=1 00:21:51.455 rw=write 00:21:51.455 time_based=1 00:21:51.455 runtime=1 00:21:51.455 ioengine=libaio 00:21:51.455 direct=1 00:21:51.455 bs=4096 00:21:51.455 iodepth=128 00:21:51.455 norandommap=0 00:21:51.455 numjobs=1 00:21:51.455 00:21:51.455 verify_dump=1 00:21:51.455 verify_backlog=512 00:21:51.455 verify_state_save=0 00:21:51.455 do_verify=1 00:21:51.455 verify=crc32c-intel 00:21:51.455 [job0] 00:21:51.455 filename=/dev/nvme0n1 00:21:51.455 [job1] 00:21:51.455 filename=/dev/nvme0n2 00:21:51.455 [job2] 00:21:51.455 filename=/dev/nvme0n3 00:21:51.455 [job3] 00:21:51.455 filename=/dev/nvme0n4 00:21:51.455 Could not set queue depth (nvme0n1) 00:21:51.455 Could not set queue depth (nvme0n2) 00:21:51.455 Could not set queue depth (nvme0n3) 00:21:51.455 Could not set queue depth (nvme0n4) 00:21:51.712 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:21:51.712 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:51.712 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:51.712 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:51.712 fio-3.35 00:21:51.712 Starting 4 threads 00:21:53.085 00:21:53.085 job0: (groupid=0, jobs=1): err= 0: pid=4122950: Sun Jul 21 08:18:02 2024 00:21:53.085 read: IOPS=5691, BW=22.2MiB/s (23.3MB/s)(22.4MiB/1008msec) 00:21:53.085 slat (usec): min=2, max=9653, avg=83.51, stdev=584.86 00:21:53.085 clat (usec): min=3531, max=23263, avg=11128.32, stdev=3027.45 00:21:53.085 lat (usec): min=3611, max=23267, avg=11211.83, stdev=3054.78 00:21:53.085 clat percentiles (usec): 00:21:53.085 | 1.00th=[ 6194], 5.00th=[ 7898], 10.00th=[ 8356], 20.00th=[ 8717], 00:21:53.085 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[10945], 00:21:53.085 | 70.00th=[11600], 80.00th=[12911], 90.00th=[15008], 95.00th=[17433], 00:21:53.085 | 99.00th=[21103], 99.50th=[21103], 99.90th=[23200], 99.95th=[23200], 00:21:53.085 | 99.99th=[23200] 00:21:53.085 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:21:53.085 slat (usec): min=3, max=10569, avg=73.74, stdev=474.38 00:21:53.085 clat (usec): min=1047, max=36538, avg=10430.73, stdev=4234.99 00:21:53.085 lat (usec): min=1063, max=36592, avg=10504.47, stdev=4265.52 00:21:53.085 clat percentiles (usec): 00:21:53.085 | 1.00th=[ 3523], 5.00th=[ 5407], 10.00th=[ 6718], 20.00th=[ 8160], 00:21:53.085 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10290], 00:21:53.085 | 70.00th=[10945], 80.00th=[11207], 90.00th=[13698], 95.00th=[18744], 00:21:53.085 | 99.00th=[33817], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:21:53.085 | 99.99th=[36439] 00:21:53.085 bw ( KiB/s): min=24368, max=24600, 
per=41.96%, avg=24484.00, stdev=164.05, samples=2 00:21:53.085 iops : min= 6092, max= 6150, avg=6121.00, stdev=41.01, samples=2 00:21:53.085 lat (msec) : 2=0.06%, 4=0.82%, 10=46.57%, 20=49.47%, 50=3.08% 00:21:53.085 cpu : usr=7.35%, sys=7.65%, ctx=578, majf=0, minf=1 00:21:53.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:53.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:53.085 issued rwts: total=5737,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:53.085 job1: (groupid=0, jobs=1): err= 0: pid=4122970: Sun Jul 21 08:18:02 2024 00:21:53.085 read: IOPS=2603, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1008msec) 00:21:53.085 slat (usec): min=2, max=17772, avg=143.85, stdev=1017.42 00:21:53.085 clat (usec): min=3111, max=62218, avg=18482.62, stdev=11807.50 00:21:53.085 lat (usec): min=5404, max=62224, avg=18626.47, stdev=11880.07 00:21:53.085 clat percentiles (usec): 00:21:53.085 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[11076], 00:21:53.085 | 30.00th=[11731], 40.00th=[13829], 50.00th=[14353], 60.00th=[15401], 00:21:53.085 | 70.00th=[16909], 80.00th=[23725], 90.00th=[38011], 95.00th=[50594], 00:21:53.085 | 99.00th=[57410], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:21:53.085 | 99.99th=[62129] 00:21:53.085 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:21:53.085 slat (usec): min=4, max=18982, avg=192.56, stdev=1164.59 00:21:53.085 clat (usec): min=1152, max=68696, avg=25204.11, stdev=12986.06 00:21:53.085 lat (usec): min=1161, max=68705, avg=25396.67, stdev=13078.33 00:21:53.085 clat percentiles (usec): 00:21:53.085 | 1.00th=[ 6849], 5.00th=[11076], 10.00th=[11469], 20.00th=[13698], 00:21:53.085 | 30.00th=[14484], 40.00th=[19268], 50.00th=[23462], 60.00th=[24249], 00:21:53.085 | 70.00th=[30802], 
80.00th=[36963], 90.00th=[41681], 95.00th=[50070], 00:21:53.085 | 99.00th=[62653], 99.50th=[63701], 99.90th=[68682], 99.95th=[68682], 00:21:53.085 | 99.99th=[68682] 00:21:53.085 bw ( KiB/s): min=11776, max=12288, per=20.62%, avg=12032.00, stdev=362.04, samples=2 00:21:53.085 iops : min= 2944, max= 3072, avg=3008.00, stdev=90.51, samples=2 00:21:53.085 lat (msec) : 2=0.09%, 4=0.26%, 10=6.13%, 20=50.98%, 50=37.27% 00:21:53.085 lat (msec) : 100=5.27% 00:21:53.085 cpu : usr=2.18%, sys=5.36%, ctx=271, majf=0, minf=1 00:21:53.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:53.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:53.085 issued rwts: total=2624,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:53.085 job2: (groupid=0, jobs=1): err= 0: pid=4123003: Sun Jul 21 08:18:02 2024 00:21:53.085 read: IOPS=2764, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1008msec) 00:21:53.085 slat (usec): min=2, max=23933, avg=167.89, stdev=1200.32 00:21:53.085 clat (usec): min=1815, max=72310, avg=22049.20, stdev=12375.19 00:21:53.085 lat (usec): min=5114, max=73289, avg=22217.09, stdev=12421.78 00:21:53.085 clat percentiles (usec): 00:21:53.085 | 1.00th=[ 9503], 5.00th=[12387], 10.00th=[12387], 20.00th=[13173], 00:21:53.085 | 30.00th=[13829], 40.00th=[15795], 50.00th=[19792], 60.00th=[20055], 00:21:53.085 | 70.00th=[21365], 80.00th=[25560], 90.00th=[46400], 95.00th=[50594], 00:21:53.085 | 99.00th=[63701], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:21:53.085 | 99.99th=[71828] 00:21:53.085 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:21:53.085 slat (usec): min=3, max=19808, avg=146.38, stdev=1108.05 00:21:53.085 clat (usec): min=927, max=63277, avg=20270.14, stdev=14040.56 00:21:53.085 lat (usec): min=934, max=63283, avg=20416.51, 
stdev=14110.88 00:21:53.085 clat percentiles (usec): 00:21:53.085 | 1.00th=[ 2376], 5.00th=[ 3884], 10.00th=[ 5342], 20.00th=[ 8586], 00:21:53.085 | 30.00th=[ 9503], 40.00th=[13829], 50.00th=[15270], 60.00th=[18744], 00:21:53.085 | 70.00th=[23987], 80.00th=[33817], 90.00th=[43779], 95.00th=[46400], 00:21:53.085 | 99.00th=[54264], 99.50th=[57410], 99.90th=[62129], 99.95th=[62129], 00:21:53.085 | 99.99th=[63177] 00:21:53.085 bw ( KiB/s): min=10456, max=14120, per=21.06%, avg=12288.00, stdev=2590.84, samples=2 00:21:53.085 iops : min= 2614, max= 3530, avg=3072.00, stdev=647.71, samples=2 00:21:53.085 lat (usec) : 1000=0.10% 00:21:53.085 lat (msec) : 2=0.36%, 4=2.36%, 10=14.76%, 20=41.80%, 50=37.57% 00:21:53.085 lat (msec) : 100=3.06% 00:21:53.085 cpu : usr=2.18%, sys=2.68%, ctx=247, majf=0, minf=1 00:21:53.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:21:53.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:53.085 issued rwts: total=2787,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:53.085 job3: (groupid=0, jobs=1): err= 0: pid=4123016: Sun Jul 21 08:18:02 2024 00:21:53.085 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:21:53.085 slat (usec): min=3, max=13533, avg=204.26, stdev=1158.38 00:21:53.085 clat (usec): min=9059, max=55536, avg=23283.92, stdev=9489.44 00:21:53.085 lat (usec): min=9066, max=55545, avg=23488.18, stdev=9606.03 00:21:53.085 clat percentiles (usec): 00:21:53.085 | 1.00th=[10945], 5.00th=[12911], 10.00th=[13173], 20.00th=[14615], 00:21:53.085 | 30.00th=[15795], 40.00th=[17957], 50.00th=[21890], 60.00th=[23462], 00:21:53.085 | 70.00th=[27132], 80.00th=[31851], 90.00th=[39584], 95.00th=[40109], 00:21:53.085 | 99.00th=[46924], 99.50th=[49546], 99.90th=[55313], 99.95th=[55313], 00:21:53.085 | 99.99th=[55313] 
00:21:53.085 write: IOPS=2409, BW=9637KiB/s (9869kB/s)(9724KiB/1009msec); 0 zone resets 00:21:53.085 slat (usec): min=5, max=14063, avg=230.81, stdev=1000.84 00:21:53.085 clat (usec): min=5434, max=83903, avg=32804.26, stdev=15529.78 00:21:53.085 lat (usec): min=7042, max=83912, avg=33035.06, stdev=15622.47 00:21:53.085 clat percentiles (usec): 00:21:53.086 | 1.00th=[ 8291], 5.00th=[11338], 10.00th=[17433], 20.00th=[21890], 00:21:53.086 | 30.00th=[23725], 40.00th=[25560], 50.00th=[28705], 60.00th=[31851], 00:21:53.086 | 70.00th=[38011], 80.00th=[43254], 90.00th=[54264], 95.00th=[68682], 00:21:53.086 | 99.00th=[80217], 99.50th=[82314], 99.90th=[84411], 99.95th=[84411], 00:21:53.086 | 99.99th=[84411] 00:21:53.086 bw ( KiB/s): min= 7552, max=10872, per=15.79%, avg=9212.00, stdev=2347.59, samples=2 00:21:53.086 iops : min= 1888, max= 2718, avg=2303.00, stdev=586.90, samples=2 00:21:53.086 lat (msec) : 10=1.74%, 20=27.42%, 50=63.94%, 100=6.90% 00:21:53.086 cpu : usr=2.88%, sys=4.46%, ctx=324, majf=0, minf=1 00:21:53.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:53.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:53.086 issued rwts: total=2048,2431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:53.086 00:21:53.086 Run status group 0 (all jobs): 00:21:53.086 READ: bw=51.1MiB/s (53.6MB/s), 8119KiB/s-22.2MiB/s (8314kB/s-23.3MB/s), io=51.5MiB (54.1MB), run=1008-1009msec 00:21:53.086 WRITE: bw=57.0MiB/s (59.8MB/s), 9637KiB/s-23.8MiB/s (9869kB/s-25.0MB/s), io=57.5MiB (60.3MB), run=1008-1009msec 00:21:53.086 00:21:53.086 Disk stats (read/write): 00:21:53.086 nvme0n1: ios=4967/5120, merge=0/0, ticks=46968/42888, in_queue=89856, util=86.57% 00:21:53.086 nvme0n2: ios=2099/2364, merge=0/0, ticks=29054/49563, in_queue=78617, util=95.93% 00:21:53.086 nvme0n3: 
ios=2474/2560, merge=0/0, ticks=27657/24505, in_queue=52162, util=99.27% 00:21:53.086 nvme0n4: ios=1954/2048, merge=0/0, ticks=22683/29897, in_queue=52580, util=97.25% 00:21:53.086 08:18:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:21:53.086 [global] 00:21:53.086 thread=1 00:21:53.086 invalidate=1 00:21:53.086 rw=randwrite 00:21:53.086 time_based=1 00:21:53.086 runtime=1 00:21:53.086 ioengine=libaio 00:21:53.086 direct=1 00:21:53.086 bs=4096 00:21:53.086 iodepth=128 00:21:53.086 norandommap=0 00:21:53.086 numjobs=1 00:21:53.086 00:21:53.086 verify_dump=1 00:21:53.086 verify_backlog=512 00:21:53.086 verify_state_save=0 00:21:53.086 do_verify=1 00:21:53.086 verify=crc32c-intel 00:21:53.086 [job0] 00:21:53.086 filename=/dev/nvme0n1 00:21:53.086 [job1] 00:21:53.086 filename=/dev/nvme0n2 00:21:53.086 [job2] 00:21:53.086 filename=/dev/nvme0n3 00:21:53.086 [job3] 00:21:53.086 filename=/dev/nvme0n4 00:21:53.086 Could not set queue depth (nvme0n1) 00:21:53.086 Could not set queue depth (nvme0n2) 00:21:53.086 Could not set queue depth (nvme0n3) 00:21:53.086 Could not set queue depth (nvme0n4) 00:21:53.086 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:53.086 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:53.086 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:53.086 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:53.086 fio-3.35 00:21:53.086 Starting 4 threads 00:21:54.460 00:21:54.460 job0: (groupid=0, jobs=1): err= 0: pid=4123300: Sun Jul 21 08:18:03 2024 00:21:54.460 read: IOPS=5220, BW=20.4MiB/s (21.4MB/s)(20.4MiB/1002msec) 00:21:54.460 slat (usec): min=2, max=9497, avg=91.21, 
stdev=551.14 00:21:54.460 clat (usec): min=581, max=50090, avg=11448.67, stdev=3192.79 00:21:54.460 lat (usec): min=2300, max=53492, avg=11539.89, stdev=3225.51 00:21:54.460 clat percentiles (usec): 00:21:54.460 | 1.00th=[ 4686], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 9241], 00:21:54.460 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:21:54.460 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14091], 95.00th=[15401], 00:21:54.460 | 99.00th=[18482], 99.50th=[19792], 99.90th=[50070], 99.95th=[50070], 00:21:54.460 | 99.99th=[50070] 00:21:54.460 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:21:54.460 slat (usec): min=3, max=21182, avg=87.41, stdev=577.68 00:21:54.460 clat (usec): min=794, max=50427, avg=11944.80, stdev=5081.79 00:21:54.460 lat (usec): min=801, max=51071, avg=12032.21, stdev=5116.43 00:21:54.460 clat percentiles (usec): 00:21:54.460 | 1.00th=[ 4424], 5.00th=[ 6587], 10.00th=[ 8029], 20.00th=[ 9503], 00:21:54.460 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11207], 60.00th=[11600], 00:21:54.460 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14484], 95.00th=[22152], 00:21:54.460 | 99.00th=[38536], 99.50th=[38536], 99.90th=[41681], 99.95th=[42730], 00:21:54.460 | 99.99th=[50594] 00:21:54.460 bw ( KiB/s): min=22200, max=22720, per=33.61%, avg=22460.00, stdev=367.70, samples=2 00:21:54.460 iops : min= 5550, max= 5680, avg=5615.00, stdev=91.92, samples=2 00:21:54.460 lat (usec) : 750=0.01%, 1000=0.02% 00:21:54.460 lat (msec) : 4=0.55%, 10=27.87%, 20=68.64%, 50=2.83%, 100=0.09% 00:21:54.460 cpu : usr=4.30%, sys=5.19%, ctx=594, majf=0, minf=9 00:21:54.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:54.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:54.461 issued rwts: total=5231,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.461 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:21:54.461 job1: (groupid=0, jobs=1): err= 0: pid=4123303: Sun Jul 21 08:18:03 2024 00:21:54.461 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:21:54.461 slat (usec): min=2, max=16102, avg=116.97, stdev=944.66 00:21:54.461 clat (usec): min=4200, max=40933, avg=16134.21, stdev=5855.25 00:21:54.461 lat (usec): min=4207, max=40939, avg=16251.17, stdev=5934.00 00:21:54.461 clat percentiles (usec): 00:21:54.461 | 1.00th=[ 7701], 5.00th=[10421], 10.00th=[11076], 20.00th=[11338], 00:21:54.461 | 30.00th=[12125], 40.00th=[13304], 50.00th=[14091], 60.00th=[15139], 00:21:54.461 | 70.00th=[18220], 80.00th=[22414], 90.00th=[25297], 95.00th=[27919], 00:21:54.461 | 99.00th=[31851], 99.50th=[35390], 99.90th=[41157], 99.95th=[41157], 00:21:54.461 | 99.99th=[41157] 00:21:54.461 write: IOPS=3733, BW=14.6MiB/s (15.3MB/s)(14.8MiB/1015msec); 0 zone resets 00:21:54.461 slat (usec): min=3, max=18714, avg=121.14, stdev=946.84 00:21:54.461 clat (usec): min=1429, max=104728, avg=18685.65, stdev=16519.69 00:21:54.461 lat (usec): min=1438, max=104738, avg=18806.79, stdev=16623.54 00:21:54.461 clat percentiles (msec): 00:21:54.461 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:21:54.461 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:21:54.461 | 70.00th=[ 19], 80.00th=[ 24], 90.00th=[ 36], 95.00th=[ 53], 00:21:54.461 | 99.00th=[ 99], 99.50th=[ 103], 99.90th=[ 105], 99.95th=[ 105], 00:21:54.461 | 99.99th=[ 105] 00:21:54.461 bw ( KiB/s): min=12288, max=17008, per=21.92%, avg=14648.00, stdev=3337.54, samples=2 00:21:54.461 iops : min= 3072, max= 4252, avg=3662.00, stdev=834.39, samples=2 00:21:54.461 lat (msec) : 2=0.04%, 4=0.26%, 10=11.85%, 20=63.44%, 50=21.26% 00:21:54.461 lat (msec) : 100=2.83%, 250=0.31% 00:21:54.461 cpu : usr=3.55%, sys=6.21%, ctx=258, majf=0, minf=13 00:21:54.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:54.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:54.461 issued rwts: total=3584,3790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.461 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:54.461 job2: (groupid=0, jobs=1): err= 0: pid=4123304: Sun Jul 21 08:18:03 2024 00:21:54.461 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:21:54.461 slat (usec): min=2, max=19137, avg=112.92, stdev=849.15 00:21:54.461 clat (usec): min=2033, max=46307, avg=15011.82, stdev=4995.25 00:21:54.461 lat (usec): min=2040, max=46309, avg=15124.74, stdev=5061.74 00:21:54.461 clat percentiles (usec): 00:21:54.461 | 1.00th=[ 4178], 5.00th=[ 8717], 10.00th=[10421], 20.00th=[12649], 00:21:54.461 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13566], 60.00th=[13829], 00:21:54.461 | 70.00th=[15139], 80.00th=[17171], 90.00th=[22938], 95.00th=[24249], 00:21:54.461 | 99.00th=[28705], 99.50th=[29492], 99.90th=[43779], 99.95th=[43779], 00:21:54.461 | 99.99th=[46400] 00:21:54.461 write: IOPS=3921, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1008msec); 0 zone resets 00:21:54.461 slat (usec): min=3, max=34140, avg=134.24, stdev=1091.79 00:21:54.461 clat (msec): min=3, max=130, avg=18.70, stdev=17.13 00:21:54.461 lat (msec): min=3, max=130, avg=18.83, stdev=17.23 00:21:54.461 clat percentiles (msec): 00:21:54.461 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 13], 00:21:54.461 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:21:54.461 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 26], 95.00th=[ 38], 00:21:54.461 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 131], 99.95th=[ 131], 00:21:54.461 | 99.99th=[ 131] 00:21:54.461 bw ( KiB/s): min=12312, max=18312, per=22.91%, avg=15312.00, stdev=4242.64, samples=2 00:21:54.461 iops : min= 3078, max= 4578, avg=3828.00, stdev=1060.66, samples=2 00:21:54.461 lat (msec) : 4=0.36%, 10=8.04%, 20=73.40%, 50=16.09%, 100=1.29% 00:21:54.461 lat (msec) : 250=0.82% 
00:21:54.461 cpu : usr=2.78%, sys=5.06%, ctx=349, majf=0, minf=15 00:21:54.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:54.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:54.461 issued rwts: total=3584,3953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.461 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:54.461 job3: (groupid=0, jobs=1): err= 0: pid=4123305: Sun Jul 21 08:18:03 2024 00:21:54.461 read: IOPS=3096, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1010msec) 00:21:54.461 slat (usec): min=2, max=25865, avg=121.22, stdev=971.11 00:21:54.461 clat (usec): min=3984, max=57706, avg=15288.74, stdev=7486.74 00:21:54.461 lat (usec): min=3989, max=57710, avg=15409.96, stdev=7535.86 00:21:54.461 clat percentiles (usec): 00:21:54.461 | 1.00th=[ 3982], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11469], 00:21:54.461 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13566], 60.00th=[13829], 00:21:54.461 | 70.00th=[14484], 80.00th=[16712], 90.00th=[24511], 95.00th=[30016], 00:21:54.461 | 99.00th=[49546], 99.50th=[49546], 99.90th=[57410], 99.95th=[57934], 00:21:54.461 | 99.99th=[57934] 00:21:54.461 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:21:54.461 slat (usec): min=3, max=12685, avg=153.45, stdev=974.38 00:21:54.461 clat (usec): min=1278, max=112710, avg=22410.19, stdev=25124.03 00:21:54.461 lat (usec): min=1282, max=112723, avg=22563.64, stdev=25305.70 00:21:54.461 clat percentiles (msec): 00:21:54.461 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 12], 00:21:54.461 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:21:54.461 | 70.00th=[ 16], 80.00th=[ 21], 90.00th=[ 58], 95.00th=[ 97], 00:21:54.461 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 113], 99.95th=[ 113], 00:21:54.461 | 99.99th=[ 113] 00:21:54.461 bw ( KiB/s): min= 7984, max=20104, per=21.01%, avg=14044.00, 
stdev=8570.13, samples=2 00:21:54.461 iops : min= 1996, max= 5026, avg=3511.00, stdev=2142.53, samples=2 00:21:54.461 lat (msec) : 2=0.42%, 4=1.12%, 10=10.12%, 20=70.59%, 50=11.91% 00:21:54.461 lat (msec) : 100=3.65%, 250=2.21% 00:21:54.461 cpu : usr=2.68%, sys=4.26%, ctx=274, majf=0, minf=15 00:21:54.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:54.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:54.461 issued rwts: total=3127,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.461 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:54.461 00:21:54.461 Run status group 0 (all jobs): 00:21:54.461 READ: bw=59.8MiB/s (62.7MB/s), 12.1MiB/s-20.4MiB/s (12.7MB/s-21.4MB/s), io=60.6MiB (63.6MB), run=1002-1015msec 00:21:54.461 WRITE: bw=65.3MiB/s (68.4MB/s), 13.9MiB/s-22.0MiB/s (14.5MB/s-23.0MB/s), io=66.2MiB (69.5MB), run=1002-1015msec 00:21:54.461 00:21:54.461 Disk stats (read/write): 00:21:54.461 nvme0n1: ios=4454/4608, merge=0/0, ticks=33990/37422, in_queue=71412, util=98.80% 00:21:54.461 nvme0n2: ios=2873/3169, merge=0/0, ticks=46969/57784, in_queue=104753, util=98.37% 00:21:54.461 nvme0n3: ios=3092/3119, merge=0/0, ticks=38547/52278, in_queue=90825, util=98.22% 00:21:54.461 nvme0n4: ios=3086/3367, merge=0/0, ticks=34341/40970, in_queue=75311, util=89.99% 00:21:54.461 08:18:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:21:54.461 08:18:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4123446 00:21:54.461 08:18:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:54.461 08:18:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:21:54.461 [global] 00:21:54.461 thread=1 00:21:54.461 invalidate=1 00:21:54.461 rw=read 00:21:54.461 time_based=1 00:21:54.461 
runtime=10 00:21:54.461 ioengine=libaio 00:21:54.461 direct=1 00:21:54.461 bs=4096 00:21:54.461 iodepth=1 00:21:54.461 norandommap=1 00:21:54.461 numjobs=1 00:21:54.461 00:21:54.461 [job0] 00:21:54.461 filename=/dev/nvme0n1 00:21:54.461 [job1] 00:21:54.461 filename=/dev/nvme0n2 00:21:54.461 [job2] 00:21:54.461 filename=/dev/nvme0n3 00:21:54.461 [job3] 00:21:54.461 filename=/dev/nvme0n4 00:21:54.461 Could not set queue depth (nvme0n1) 00:21:54.461 Could not set queue depth (nvme0n2) 00:21:54.461 Could not set queue depth (nvme0n3) 00:21:54.461 Could not set queue depth (nvme0n4) 00:21:54.461 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:54.461 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:54.461 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:54.461 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:54.461 fio-3.35 00:21:54.461 Starting 4 threads 00:21:57.747 08:18:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:57.747 08:18:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:57.747 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=9220096, buflen=4096 00:21:57.747 fio: pid=4123542, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:58.006 08:18:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:58.006 08:18:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:58.006 fio: io_u error on file /dev/nvme0n3: Remote I/O 
error: read offset=380928, buflen=4096 00:21:58.006 fio: pid=4123541, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:58.263 08:18:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:58.263 08:18:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:58.263 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=35860480, buflen=4096 00:21:58.263 fio: pid=4123539, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:58.520 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=55128064, buflen=4096 00:21:58.520 fio: pid=4123540, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:58.520 08:18:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:58.520 08:18:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:58.520 00:21:58.520 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4123539: Sun Jul 21 08:18:07 2024 00:21:58.520 read: IOPS=2543, BW=9.94MiB/s (10.4MB/s)(34.2MiB/3442msec) 00:21:58.520 slat (usec): min=5, max=13896, avg=13.50, stdev=176.26 00:21:58.520 clat (usec): min=222, max=44117, avg=374.20, stdev=2065.62 00:21:58.520 lat (usec): min=228, max=54976, avg=387.70, stdev=2125.23 00:21:58.520 clat percentiles (usec): 00:21:58.520 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 251], 00:21:58.520 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:21:58.520 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 306], 00:21:58.520 | 99.00th=[ 359], 99.50th=[ 400], 99.90th=[41681], 99.95th=[42206], 00:21:58.520 | 99.99th=[44303] 00:21:58.520 bw ( 
KiB/s): min= 696, max=15288, per=44.01%, avg=11660.00, stdev=5424.04, samples=6 00:21:58.520 iops : min= 174, max= 3822, avg=2915.00, stdev=1356.01, samples=6 00:21:58.520 lat (usec) : 250=18.70%, 500=80.96%, 750=0.05%, 1000=0.03% 00:21:58.520 lat (msec) : 50=0.25% 00:21:58.520 cpu : usr=2.01%, sys=4.18%, ctx=8759, majf=0, minf=1 00:21:58.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:58.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.520 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.520 issued rwts: total=8756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:58.520 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4123540: Sun Jul 21 08:18:07 2024 00:21:58.520 read: IOPS=3629, BW=14.2MiB/s (14.9MB/s)(52.6MiB/3708msec) 00:21:58.520 slat (usec): min=5, max=11607, avg=12.19, stdev=160.43 00:21:58.520 clat (usec): min=196, max=1751, avg=259.12, stdev=36.54 00:21:58.520 lat (usec): min=203, max=11889, avg=271.31, stdev=165.15 00:21:58.520 clat percentiles (usec): 00:21:58.520 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:21:58.520 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:21:58.520 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:21:58.520 | 99.00th=[ 326], 99.50th=[ 445], 99.90th=[ 627], 99.95th=[ 799], 00:21:58.520 | 99.99th=[ 1270] 00:21:58.520 bw ( KiB/s): min=13432, max=15656, per=54.93%, avg=14553.71, stdev=899.05, samples=7 00:21:58.520 iops : min= 3358, max= 3914, avg=3638.43, stdev=224.76, samples=7 00:21:58.520 lat (usec) : 250=42.19%, 500=57.56%, 750=0.18%, 1000=0.04% 00:21:58.520 lat (msec) : 2=0.02% 00:21:58.520 cpu : usr=1.92%, sys=5.83%, ctx=13465, majf=0, minf=1 00:21:58.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:58.520 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.520 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.520 issued rwts: total=13460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:58.520 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4123541: Sun Jul 21 08:18:07 2024 00:21:58.520 read: IOPS=29, BW=117KiB/s (119kB/s)(372KiB/3191msec) 00:21:58.520 slat (nsec): min=12014, max=47106, avg=20885.48, stdev=9657.77 00:21:58.520 clat (usec): min=405, max=42003, avg=34048.16, stdev=15368.42 00:21:58.520 lat (usec): min=419, max=42017, avg=34069.11, stdev=15368.58 00:21:58.520 clat percentiles (usec): 00:21:58.520 | 1.00th=[ 408], 5.00th=[ 482], 10.00th=[ 537], 20.00th=[40633], 00:21:58.520 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:58.520 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:58.520 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:58.520 | 99.99th=[42206] 00:21:58.520 bw ( KiB/s): min= 96, max= 144, per=0.44%, avg=117.33, stdev=21.27, samples=6 00:21:58.520 iops : min= 24, max= 36, avg=29.33, stdev= 5.32, samples=6 00:21:58.520 lat (usec) : 500=8.51%, 750=8.51% 00:21:58.520 lat (msec) : 50=81.91% 00:21:58.520 cpu : usr=0.13%, sys=0.00%, ctx=95, majf=0, minf=1 00:21:58.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:58.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.520 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.520 issued rwts: total=94,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:58.520 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4123542: Sun Jul 21 08:18:07 2024 
00:21:58.520 read: IOPS=775, BW=3102KiB/s (3176kB/s)(9004KiB/2903msec) 00:21:58.520 slat (nsec): min=5419, max=45864, avg=7585.16, stdev=3765.75 00:21:58.520 clat (usec): min=258, max=41048, avg=1267.51, stdev=5984.01 00:21:58.520 lat (usec): min=264, max=41066, avg=1275.09, stdev=5986.35 00:21:58.520 clat percentiles (usec): 00:21:58.520 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 322], 00:21:58.520 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 371], 00:21:58.520 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 490], 00:21:58.520 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:58.520 | 99.99th=[41157] 00:21:58.520 bw ( KiB/s): min= 96, max= 9544, per=7.50%, avg=1987.20, stdev=4224.38, samples=5 00:21:58.520 iops : min= 24, max= 2386, avg=496.80, stdev=1056.10, samples=5 00:21:58.520 lat (usec) : 500=95.29%, 750=2.31%, 1000=0.09% 00:21:58.520 lat (msec) : 2=0.04%, 50=2.22% 00:21:58.520 cpu : usr=0.28%, sys=0.96%, ctx=2253, majf=0, minf=1 00:21:58.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:58.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.520 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.520 issued rwts: total=2252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:58.520 00:21:58.520 Run status group 0 (all jobs): 00:21:58.520 READ: bw=25.9MiB/s (27.1MB/s), 117KiB/s-14.2MiB/s (119kB/s-14.9MB/s), io=95.9MiB (101MB), run=2903-3708msec 00:21:58.520 00:21:58.520 Disk stats (read/write): 00:21:58.520 nvme0n1: ios=8753/0, merge=0/0, ticks=3068/0, in_queue=3068, util=95.57% 00:21:58.520 nvme0n2: ios=13146/0, merge=0/0, ticks=4083/0, in_queue=4083, util=99.09% 00:21:58.520 nvme0n3: ios=91/0, merge=0/0, ticks=3087/0, in_queue=3087, util=96.79% 00:21:58.520 nvme0n4: ios=2225/0, merge=0/0, ticks=4025/0, in_queue=4025, util=99.76% 
00:21:58.777 08:18:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:58.777 08:18:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:59.033 08:18:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:59.033 08:18:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:59.290 08:18:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:59.290 08:18:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:59.546 08:18:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:59.546 08:18:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 4123446 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:59.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o 
NAME,SERIAL 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:59.804 nvmf hotplug test: fio failed as expected 00:21:59.804 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:00.061 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:22:00.061 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:22:00.061 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:22:00.061 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:22:00.061 08:18:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:22:00.061 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:00.061 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:22:00.061 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:00.061 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:22:00.061 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:00.061 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:00.061 rmmod nvme_tcp 00:22:00.061 rmmod nvme_fabrics 
00:22:00.321 rmmod nvme_keyring 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4121299 ']' 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4121299 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 4121299 ']' 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 4121299 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4121299 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4121299' 00:22:00.321 killing process with pid 4121299 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 4121299 00:22:00.321 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 4121299 00:22:00.638 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:00.638 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:00.638 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:00.638 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.638 
08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:00.638 08:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.638 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.638 08:18:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.545 08:18:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:02.545 00:22:02.545 real 0m23.219s 00:22:02.545 user 1m19.992s 00:22:02.545 sys 0m7.270s 00:22:02.545 08:18:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:02.545 08:18:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.545 ************************************ 00:22:02.545 END TEST nvmf_fio_target 00:22:02.545 ************************************ 00:22:02.545 08:18:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:02.545 08:18:12 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:02.545 08:18:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:02.545 08:18:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:02.545 08:18:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:02.545 ************************************ 00:22:02.545 START TEST nvmf_bdevio 00:22:02.545 ************************************ 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:02.545 * Looking for test storage... 
00:22:02.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.545 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:22:02.546 08:18:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:04.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:04.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.448 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:04.449 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:04.449 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.449 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:04.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:22:04.706 00:22:04.706 --- 10.0.0.2 ping statistics --- 00:22:04.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.706 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:04.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:22:04.706 00:22:04.706 --- 10.0.0.1 ping statistics --- 00:22:04.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.706 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4126662 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4126662 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 4126662 ']' 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:22:04.706 08:18:14 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:04.706 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:04.706 [2024-07-21 08:18:14.288263] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:22:04.706 [2024-07-21 08:18:14.288341] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.706 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.962 [2024-07-21 08:18:14.357661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.963 [2024-07-21 08:18:14.452876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.963 [2024-07-21 08:18:14.452944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.963 [2024-07-21 08:18:14.452960] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.963 [2024-07-21 08:18:14.452974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.963 [2024-07-21 08:18:14.452985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:04.963 [2024-07-21 08:18:14.453075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:04.963 [2024-07-21 08:18:14.453141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:04.963 [2024-07-21 08:18:14.453206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:04.963 [2024-07-21 08:18:14.453209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.963 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.963 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:22:04.963 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.963 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:04.963 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:05.220 [2024-07-21 08:18:14.606294] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:05.220 Malloc0 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:05.220 [2024-07-21 08:18:14.657317] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:22:05.220 { 00:22:05.220 "params": { 00:22:05.220 "name": "Nvme$subsystem", 00:22:05.220 "trtype": "$TEST_TRANSPORT", 00:22:05.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.220 "adrfam": "ipv4", 00:22:05.220 "trsvcid": "$NVMF_PORT", 00:22:05.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.220 "hdgst": ${hdgst:-false}, 00:22:05.220 "ddgst": ${ddgst:-false} 00:22:05.220 }, 00:22:05.220 "method": "bdev_nvme_attach_controller" 00:22:05.220 } 00:22:05.220 EOF 00:22:05.220 )") 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:22:05.220 08:18:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:05.220 "params": { 00:22:05.220 "name": "Nvme1", 00:22:05.220 "trtype": "tcp", 00:22:05.220 "traddr": "10.0.0.2", 00:22:05.220 "adrfam": "ipv4", 00:22:05.220 "trsvcid": "4420", 00:22:05.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.220 "hdgst": false, 00:22:05.220 "ddgst": false 00:22:05.220 }, 00:22:05.220 "method": "bdev_nvme_attach_controller" 00:22:05.220 }' 00:22:05.220 [2024-07-21 08:18:14.699698] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:22:05.220 [2024-07-21 08:18:14.699773] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126696 ] 00:22:05.220 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.220 [2024-07-21 08:18:14.759928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:05.220 [2024-07-21 08:18:14.848466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.220 [2024-07-21 08:18:14.848494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.220 [2024-07-21 08:18:14.848497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.478 I/O targets: 00:22:05.478 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:05.478 00:22:05.478 00:22:05.478 CUnit - A unit testing framework for C - Version 2.1-3 00:22:05.478 http://cunit.sourceforge.net/ 00:22:05.478 00:22:05.478 00:22:05.478 Suite: bdevio tests on: Nvme1n1 00:22:05.737 Test: blockdev write read block ...passed 00:22:05.737 Test: blockdev write zeroes read block ...passed 00:22:05.737 Test: blockdev write zeroes read no split ...passed 00:22:05.737 Test: blockdev write zeroes read split ...passed 00:22:05.737 Test: blockdev write zeroes read split partial ...passed 00:22:05.737 Test: blockdev reset ...[2024-07-21 08:18:15.226697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:05.737 [2024-07-21 08:18:15.226814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2ea60 (9): Bad file descriptor 00:22:05.996 [2024-07-21 08:18:15.369510] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:05.996 passed 00:22:05.996 Test: blockdev write read 8 blocks ...passed 00:22:05.996 Test: blockdev write read size > 128k ...passed 00:22:05.996 Test: blockdev write read invalid size ...passed 00:22:05.996 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:05.996 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:05.996 Test: blockdev write read max offset ...passed 00:22:05.996 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:05.996 Test: blockdev writev readv 8 blocks ...passed 00:22:05.996 Test: blockdev writev readv 30 x 1block ...passed 00:22:05.996 Test: blockdev writev readv block ...passed 00:22:05.996 Test: blockdev writev readv size > 128k ...passed 00:22:05.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:05.997 Test: blockdev comparev and writev ...[2024-07-21 08:18:15.582149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.997 [2024-07-21 08:18:15.582186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.997 [2024-07-21 08:18:15.582211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.997 [2024-07-21 08:18:15.582228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:05.997 [2024-07-21 08:18:15.582574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.997 [2024-07-21 08:18:15.582600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:05.997 [2024-07-21 08:18:15.582630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.997 [2024-07-21 08:18:15.582657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:05.997 [2024-07-21 08:18:15.582994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.997 [2024-07-21 08:18:15.583019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:05.997 [2024-07-21 08:18:15.583041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.997 [2024-07-21 08:18:15.583058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:05.997 [2024-07-21 08:18:15.583386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.997 [2024-07-21 08:18:15.583411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:05.997 [2024-07-21 08:18:15.583433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:05.997 [2024-07-21 08:18:15.583449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:05.997 passed 00:22:06.256 Test: blockdev nvme passthru rw ...passed 00:22:06.256 Test: blockdev nvme passthru vendor specific ...[2024-07-21 08:18:15.665879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:06.256 [2024-07-21 08:18:15.665909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:06.256 [2024-07-21 08:18:15.666062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:06.256 [2024-07-21 08:18:15.666085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:06.256 [2024-07-21 08:18:15.666231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:06.256 [2024-07-21 08:18:15.666255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:06.256 [2024-07-21 08:18:15.666404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:06.256 [2024-07-21 08:18:15.666428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:06.256 passed 00:22:06.256 Test: blockdev nvme admin passthru ...passed 00:22:06.256 Test: blockdev copy ...passed 00:22:06.256 00:22:06.256 Run Summary: Type Total Ran Passed Failed Inactive 00:22:06.256 suites 1 1 n/a 0 0 00:22:06.256 tests 23 23 23 0 0 00:22:06.256 asserts 152 152 152 0 n/a 00:22:06.256 00:22:06.256 Elapsed time = 1.302 seconds 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.514 rmmod nvme_tcp 00:22:06.514 rmmod nvme_fabrics 00:22:06.514 rmmod nvme_keyring 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4126662 ']' 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4126662 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 4126662 ']' 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 4126662 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:06.514 08:18:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4126662 00:22:06.514 08:18:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:06.514 08:18:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:06.514 08:18:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4126662' 00:22:06.514 killing process with pid 4126662 00:22:06.514 08:18:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 
4126662 00:22:06.514 08:18:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 4126662 00:22:06.772 08:18:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.772 08:18:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.772 08:18:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.772 08:18:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.772 08:18:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.772 08:18:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.772 08:18:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.772 08:18:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.677 08:18:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:08.677 00:22:08.677 real 0m6.231s 00:22:08.677 user 0m10.185s 00:22:08.677 sys 0m2.016s 00:22:08.677 08:18:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.677 08:18:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:08.677 ************************************ 00:22:08.677 END TEST nvmf_bdevio 00:22:08.677 ************************************ 00:22:08.935 08:18:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:08.935 08:18:18 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:08.935 08:18:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:08.935 08:18:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.935 08:18:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.935 ************************************ 00:22:08.935 START TEST nvmf_auth_target 00:22:08.935 
************************************ 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:08.935 * Looking for test storage... 00:22:08.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.935 08:18:18 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.935 08:18:18 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.935 08:18:18 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.936 08:18:18 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.936 08:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.836 08:18:20 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:10.836 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.836 08:18:20 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:10.836 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:10.836 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:10.836 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:10.836 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:10.837 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:22:11.095 00:22:11.095 --- 10.0.0.2 ping statistics --- 00:22:11.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.095 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:22:11.095 00:22:11.095 --- 10.0.0.1 ping statistics --- 00:22:11.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.095 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4128769 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4128769 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4128769 ']' 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.095 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=4128894 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=d4418cb557feab1d1dd2e2f9cc208792383c4c9115015080 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.KaO 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d4418cb557feab1d1dd2e2f9cc208792383c4c9115015080 0 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d4418cb557feab1d1dd2e2f9cc208792383c4c9115015080 0 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d4418cb557feab1d1dd2e2f9cc208792383c4c9115015080 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.KaO 00:22:11.353 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.KaO 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.KaO 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:11.354 08:18:20 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fc29fc3a2248952b0086ed1cf76227ec80547b77035e43976eb4aa0b03953ae5 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5mS 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fc29fc3a2248952b0086ed1cf76227ec80547b77035e43976eb4aa0b03953ae5 3 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fc29fc3a2248952b0086ed1cf76227ec80547b77035e43976eb4aa0b03953ae5 3 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fc29fc3a2248952b0086ed1cf76227ec80547b77035e43976eb4aa0b03953ae5 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5mS 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5mS 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.5mS 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=53e4b50f4320eb871d2c91221c75749d 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.est 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 53e4b50f4320eb871d2c91221c75749d 1 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 53e4b50f4320eb871d2c91221c75749d 1 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=53e4b50f4320eb871d2c91221c75749d 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:11.354 08:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.612 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.est 00:22:11.612 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.est 00:22:11.612 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.est 00:22:11.612 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:22:11.612 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.612 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:22:11.612 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.612 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4f84aeac133f6fc937146634104709535aaa969dac376dd 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1jz 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4f84aeac133f6fc937146634104709535aaa969dac376dd 2 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4f84aeac133f6fc937146634104709535aaa969dac376dd 2 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4f84aeac133f6fc937146634104709535aaa969dac376dd 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1jz 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1jz 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.1jz 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fa3196835d4d92185ef37bcd9a7cc71aec5c52f528f2211e 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FoQ 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fa3196835d4d92185ef37bcd9a7cc71aec5c52f528f2211e 2 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fa3196835d4d92185ef37bcd9a7cc71aec5c52f528f2211e 2 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fa3196835d4d92185ef37bcd9a7cc71aec5c52f528f2211e 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FoQ 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FoQ 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.FoQ 00:22:11.613 08:18:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d49a76138e6bdb0abcafad94574aa4c4 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.m71 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d49a76138e6bdb0abcafad94574aa4c4 1 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d49a76138e6bdb0abcafad94574aa4c4 1 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d49a76138e6bdb0abcafad94574aa4c4 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.m71 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.m71 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.m71 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=15165b8d271536351c93186a394e26453b81e7cbc786ed2e5e413dd43840335b 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.i4Y 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 15165b8d271536351c93186a394e26453b81e7cbc786ed2e5e413dd43840335b 3 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 15165b8d271536351c93186a394e26453b81e7cbc786ed2e5e413dd43840335b 3 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=15165b8d271536351c93186a394e26453b81e7cbc786ed2e5e413dd43840335b 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.i4Y 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.i4Y 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.i4Y 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 4128769 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4128769 ']' 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
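The `gen_dhchap_key` calls traced above each draw `len/2` random bytes from `/dev/urandom` with `xxd` and keep the hex dump as the key material (32 hex chars for sha256, 48 for sha384, 64 for sha512/null-64). A minimal stand-alone sketch of that step — `gen_hex_key` is a hypothetical name, not the SPDK helper, and it uses `od` instead of `xxd` so it needs only coreutils:

```shell
# Hypothetical helper mirroring the "xxd -p -c0 -l N /dev/urandom" step
# in gen_dhchap_key: emit len hex characters (len/2 random bytes).
gen_hex_key() {
    local len=$1                          # 32, 48 or 64 in the log above
    od -An -tx1 -N $((len / 2)) /dev/urandom | tr -d ' \n'
}

key=$(gen_hex_key 64)
echo "${#key}"    # prints 64: a sha512-sized key, as for keys[3]
```

The real helper then writes the hex string to a `mktemp` file, `chmod 0600`s it, and wraps it into the `DHHC-1:` transport format via the inline `python -` step visible in the trace.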
00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.613 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.898 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.898 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:11.898 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 4128894 /var/tmp/host.sock 00:22:11.898 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4128894 ']' 00:22:11.898 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:22:11.898 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.898 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:11.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:22:11.898 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.898 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KaO 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.KaO 00:22:12.155 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.KaO 00:22:12.411 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.5mS ]] 00:22:12.411 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5mS 00:22:12.411 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.411 08:18:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.411 08:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.411 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5mS 00:22:12.411 08:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5mS 00:22:12.668 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:12.668 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.est 00:22:12.668 08:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.668 08:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.668 08:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.668 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.est 00:22:12.668 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.est 00:22:12.926 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.1jz ]] 00:22:12.926 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1jz 00:22:12.926 08:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.926 08:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.926 08:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.926 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1jz 00:22:12.926 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1jz 00:22:13.184 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:13.184 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FoQ 00:22:13.184 08:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.184 08:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.184 08:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.184 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FoQ 00:22:13.184 08:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FoQ 00:22:13.441 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.m71 ]] 00:22:13.442 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.m71 00:22:13.442 08:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.442 08:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.442 08:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.442 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.m71 00:22:13.442 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.m71 00:22:13.699 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:13.699 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.i4Y 00:22:13.699 08:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.699 08:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.699 08:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.699 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.i4Y 00:22:13.699 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.i4Y 00:22:13.956 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:22:13.956 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:13.956 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.956 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.956 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:13.956 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:14.213 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:22:14.213 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.213 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:14.213 08:18:23 
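The three nested `for` loops entered here (`digest` × `dhgroup` × `keyid`) drive the rest of the section: each iteration reconfigures the host with `bdev_nvme_set_options` and re-attempts authenticated attach with one key. A reduced, stand-alone sketch of that iteration structure — the array contents are illustrative, since this excerpt only shows the `sha256`/`null` combination:

```shell
digests=(sha256 sha384 sha512)   # correspond to digest ids 1/2/3 above
dhgroups=(null)                  # only "null" appears in this excerpt
keys=(key0 key1 key2 key3)

runs=$(for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${keys[@]}"; do
            echo "set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup; attach with $keyid"
        done
    done
done)
echo "$runs" | wc -l    # prints 12: one connect attempt per combination
```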
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:14.213 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:14.213 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.213 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.213 08:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.213 08:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.213 08:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.213 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.213 08:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.471 00:22:14.471 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.471 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.471 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.728 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.728 
08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.728 08:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.728 08:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.728 08:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.728 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.728 { 00:22:14.728 "cntlid": 1, 00:22:14.728 "qid": 0, 00:22:14.728 "state": "enabled", 00:22:14.728 "thread": "nvmf_tgt_poll_group_000", 00:22:14.728 "listen_address": { 00:22:14.728 "trtype": "TCP", 00:22:14.728 "adrfam": "IPv4", 00:22:14.728 "traddr": "10.0.0.2", 00:22:14.728 "trsvcid": "4420" 00:22:14.728 }, 00:22:14.728 "peer_address": { 00:22:14.728 "trtype": "TCP", 00:22:14.728 "adrfam": "IPv4", 00:22:14.728 "traddr": "10.0.0.1", 00:22:14.728 "trsvcid": "36414" 00:22:14.728 }, 00:22:14.728 "auth": { 00:22:14.728 "state": "completed", 00:22:14.728 "digest": "sha256", 00:22:14.728 "dhgroup": "null" 00:22:14.728 } 00:22:14.728 } 00:22:14.728 ]' 00:22:14.728 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.985 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:14.985 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.985 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:14.985 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.985 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.985 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.985 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.243 08:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:22:16.214 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.214 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.214 08:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.214 08:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.214 08:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.214 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.214 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:16.214 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
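The `--dhchap-secret` strings passed to `nvme connect` above are the same hex keys generated earlier, wrapped in the DH-HMAC-CHAP secret representation: `DHHC-1:<hash id>:<base64(ascii-hex key || 4-byte CRC)>:`. A quick sketch that unwraps the first secret from the connect line to show this (coreutils `cut`/`base64` only; `head -c 48` discards the trailing CRC bytes):

```shell
# First secret from the connect line above; the middle colon-separated
# field is base64(ascii-hex key || CRC).
secret='DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==:'
b64=$(printf '%s' "$secret" | cut -d: -f3)
# The first 48 decoded bytes are exactly the hex key that
# gen_dhchap_key produced for keys[0] earlier in the log.
printf '%s' "$b64" | base64 -d | head -c 48
echo
```

Decoding it reproduces `d4418cb557feab1d1dd2e2f9cc208792383c4c9115015080`, the `keys[0]` material from the start of this section.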
00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.471 08:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.729 00:22:16.729 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.729 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.729 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:22:16.986 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.986 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.986 08:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.986 08:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.986 08:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.986 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.986 { 00:22:16.986 "cntlid": 3, 00:22:16.986 "qid": 0, 00:22:16.986 "state": "enabled", 00:22:16.986 "thread": "nvmf_tgt_poll_group_000", 00:22:16.986 "listen_address": { 00:22:16.986 "trtype": "TCP", 00:22:16.986 "adrfam": "IPv4", 00:22:16.986 "traddr": "10.0.0.2", 00:22:16.986 "trsvcid": "4420" 00:22:16.986 }, 00:22:16.986 "peer_address": { 00:22:16.986 "trtype": "TCP", 00:22:16.986 "adrfam": "IPv4", 00:22:16.986 "traddr": "10.0.0.1", 00:22:16.986 "trsvcid": "36422" 00:22:16.986 }, 00:22:16.986 "auth": { 00:22:16.986 "state": "completed", 00:22:16.986 "digest": "sha256", 00:22:16.986 "dhgroup": "null" 00:22:16.986 } 00:22:16.986 } 00:22:16.986 ]' 00:22:16.986 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.986 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:16.986 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.986 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:16.986 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.245 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.245 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
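Each attach is verified by pulling the qpair's `auth` block out of `nvmf_subsystem_get_qpairs`; the log does this with `jq -r '.[0].auth.state'` and friends. A dependency-free approximation of the same check against the JSON shape printed above (grep/sed/cut instead of jq, so it runs with coreutils alone; the JSON literal is a trimmed copy, not live RPC output):

```shell
# Trimmed copy of the qpairs JSON printed above.
qpairs='[ { "cntlid": 3, "qid": 0, "state": "enabled",
            "auth": { "state": "completed", "digest": "sha256", "dhgroup": "null" } } ]'
# jq -r '.[0].auth.state' equivalent for this fixed shape: the second
# "state" field is the one inside the auth object.
auth_state=$(printf '%s' "$qpairs" | grep -o '"state": "[a-z]*"' | sed -n 2p | cut -d'"' -f4)
[ "$auth_state" = "completed" ] && echo "DH-HMAC-CHAP negotiation completed"
```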
nvme0 00:22:17.245 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.504 08:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:22:18.446 08:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.446 08:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.446 08:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.446 08:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.446 08:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.446 08:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.446 08:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:18.446 08:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.703 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.960 00:22:18.960 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.960 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.960 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.217 { 00:22:19.217 "cntlid": 5, 00:22:19.217 "qid": 0, 00:22:19.217 "state": "enabled", 00:22:19.217 "thread": "nvmf_tgt_poll_group_000", 00:22:19.217 "listen_address": { 00:22:19.217 "trtype": "TCP", 00:22:19.217 "adrfam": "IPv4", 00:22:19.217 "traddr": "10.0.0.2", 00:22:19.217 "trsvcid": "4420" 00:22:19.217 }, 00:22:19.217 "peer_address": { 00:22:19.217 "trtype": "TCP", 00:22:19.217 "adrfam": "IPv4", 00:22:19.217 "traddr": "10.0.0.1", 00:22:19.217 "trsvcid": "34468" 00:22:19.217 }, 00:22:19.217 "auth": { 00:22:19.217 "state": "completed", 00:22:19.217 "digest": "sha256", 00:22:19.217 "dhgroup": "null" 00:22:19.217 } 00:22:19.217 } 00:22:19.217 ]' 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.217 08:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.474 08:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:22:20.405 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.405 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.405 08:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.405 08:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.405 08:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.405 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.405 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:20.405 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:20.973 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:22:20.973 08:18:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.973 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:20.973 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:20.973 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:20.973 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.973 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:20.973 08:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.973 08:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.973 08:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.973 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.973 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.232 00:22:21.232 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.232 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.232 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:21.491 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.491 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.491 08:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.491 08:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.491 08:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.491 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.491 { 00:22:21.491 "cntlid": 7, 00:22:21.491 "qid": 0, 00:22:21.491 "state": "enabled", 00:22:21.491 "thread": "nvmf_tgt_poll_group_000", 00:22:21.491 "listen_address": { 00:22:21.491 "trtype": "TCP", 00:22:21.491 "adrfam": "IPv4", 00:22:21.491 "traddr": "10.0.0.2", 00:22:21.491 "trsvcid": "4420" 00:22:21.491 }, 00:22:21.491 "peer_address": { 00:22:21.491 "trtype": "TCP", 00:22:21.491 "adrfam": "IPv4", 00:22:21.491 "traddr": "10.0.0.1", 00:22:21.491 "trsvcid": "34508" 00:22:21.491 }, 00:22:21.491 "auth": { 00:22:21.491 "state": "completed", 00:22:21.491 "digest": "sha256", 00:22:21.491 "dhgroup": "null" 00:22:21.491 } 00:22:21.491 } 00:22:21.491 ]' 00:22:21.491 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.491 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:21.491 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.491 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:21.491 08:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.491 08:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.491 08:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:22:21.491 08:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.748 08:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:22:22.683 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.684 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.684 08:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.684 08:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.684 08:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.684 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.684 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.684 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:22.684 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.941 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.198 00:22:23.198 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.198 08:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.198 08:18:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.456 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.456 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.456 08:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.456 08:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.456 08:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.456 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.456 { 00:22:23.456 "cntlid": 9, 00:22:23.456 "qid": 0, 00:22:23.456 "state": "enabled", 00:22:23.456 "thread": "nvmf_tgt_poll_group_000", 00:22:23.456 "listen_address": { 00:22:23.456 "trtype": "TCP", 00:22:23.456 "adrfam": "IPv4", 00:22:23.456 "traddr": "10.0.0.2", 00:22:23.456 "trsvcid": "4420" 00:22:23.456 }, 00:22:23.456 "peer_address": { 00:22:23.456 "trtype": "TCP", 00:22:23.456 "adrfam": "IPv4", 00:22:23.456 "traddr": "10.0.0.1", 00:22:23.456 "trsvcid": "34528" 00:22:23.456 }, 00:22:23.456 "auth": { 00:22:23.456 "state": "completed", 00:22:23.456 "digest": "sha256", 00:22:23.456 "dhgroup": "ffdhe2048" 00:22:23.456 } 00:22:23.456 } 00:22:23.456 ]' 00:22:23.456 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.714 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:23.714 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.714 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:23.714 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.714 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.714 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.714 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.971 08:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:22:24.917 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.917 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.917 08:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.917 08:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.917 08:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.917 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:24.917 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:24.917 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.175 08:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.742 00:22:25.742 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.742 08:18:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.742 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.000 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.000 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.000 08:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.000 08:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.000 08:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.000 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.000 { 00:22:26.000 "cntlid": 11, 00:22:26.000 "qid": 0, 00:22:26.000 "state": "enabled", 00:22:26.000 "thread": "nvmf_tgt_poll_group_000", 00:22:26.000 "listen_address": { 00:22:26.000 "trtype": "TCP", 00:22:26.000 "adrfam": "IPv4", 00:22:26.000 "traddr": "10.0.0.2", 00:22:26.000 "trsvcid": "4420" 00:22:26.000 }, 00:22:26.000 "peer_address": { 00:22:26.000 "trtype": "TCP", 00:22:26.000 "adrfam": "IPv4", 00:22:26.000 "traddr": "10.0.0.1", 00:22:26.000 "trsvcid": "34562" 00:22:26.000 }, 00:22:26.000 "auth": { 00:22:26.000 "state": "completed", 00:22:26.000 "digest": "sha256", 00:22:26.000 "dhgroup": "ffdhe2048" 00:22:26.000 } 00:22:26.000 } 00:22:26.000 ]' 00:22:26.000 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.000 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:26.000 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.000 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:26.001 08:18:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.001 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.001 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.001 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.260 08:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:22:27.195 08:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.195 08:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.195 08:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.195 08:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.195 08:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.195 08:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.195 08:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:27.195 08:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:27.470 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:22:27.470 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.470 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:27.470 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:27.470 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:27.470 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.470 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.470 08:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.470 08:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.728 08:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.729 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.729 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.987 
00:22:27.987 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.987 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.987 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.245 { 00:22:28.245 "cntlid": 13, 00:22:28.245 "qid": 0, 00:22:28.245 "state": "enabled", 00:22:28.245 "thread": "nvmf_tgt_poll_group_000", 00:22:28.245 "listen_address": { 00:22:28.245 "trtype": "TCP", 00:22:28.245 "adrfam": "IPv4", 00:22:28.245 "traddr": "10.0.0.2", 00:22:28.245 "trsvcid": "4420" 00:22:28.245 }, 00:22:28.245 "peer_address": { 00:22:28.245 "trtype": "TCP", 00:22:28.245 "adrfam": "IPv4", 00:22:28.245 "traddr": "10.0.0.1", 00:22:28.245 "trsvcid": "38922" 00:22:28.245 }, 00:22:28.245 "auth": { 00:22:28.245 "state": "completed", 00:22:28.245 "digest": "sha256", 00:22:28.245 "dhgroup": "ffdhe2048" 00:22:28.245 } 00:22:28.245 } 00:22:28.245 ]' 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.245 08:18:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.245 08:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.503 08:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:22:29.477 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.735 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.735 08:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.735 08:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.735 08:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.735 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.735 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:22:29.735 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.993 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:30.251 
00:22:30.251 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.251 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.251 08:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.509 08:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.509 08:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.509 08:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.509 08:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.509 08:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.509 08:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.509 { 00:22:30.509 "cntlid": 15, 00:22:30.509 "qid": 0, 00:22:30.509 "state": "enabled", 00:22:30.509 "thread": "nvmf_tgt_poll_group_000", 00:22:30.509 "listen_address": { 00:22:30.509 "trtype": "TCP", 00:22:30.509 "adrfam": "IPv4", 00:22:30.509 "traddr": "10.0.0.2", 00:22:30.509 "trsvcid": "4420" 00:22:30.509 }, 00:22:30.509 "peer_address": { 00:22:30.509 "trtype": "TCP", 00:22:30.509 "adrfam": "IPv4", 00:22:30.509 "traddr": "10.0.0.1", 00:22:30.509 "trsvcid": "38956" 00:22:30.509 }, 00:22:30.509 "auth": { 00:22:30.509 "state": "completed", 00:22:30.509 "digest": "sha256", 00:22:30.509 "dhgroup": "ffdhe2048" 00:22:30.509 } 00:22:30.509 } 00:22:30.509 ]' 00:22:30.509 08:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.509 08:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:30.509 08:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.509 08:18:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:30.509 08:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.767 08:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.767 08:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.767 08:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.767 08:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:22:31.702 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.702 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.702 08:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.702 08:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.702 08:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.702 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:31.702 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.702 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:31.702 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:32.266 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:22:32.267 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.267 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:32.267 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:32.267 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:32.267 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.267 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.267 08:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.267 08:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.267 08:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.267 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.267 08:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.524 00:22:32.524 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.524 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.524 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.782 { 00:22:32.782 "cntlid": 17, 00:22:32.782 "qid": 0, 00:22:32.782 "state": "enabled", 00:22:32.782 "thread": "nvmf_tgt_poll_group_000", 00:22:32.782 "listen_address": { 00:22:32.782 "trtype": "TCP", 00:22:32.782 "adrfam": "IPv4", 00:22:32.782 "traddr": "10.0.0.2", 00:22:32.782 "trsvcid": "4420" 00:22:32.782 }, 00:22:32.782 "peer_address": { 00:22:32.782 "trtype": "TCP", 00:22:32.782 "adrfam": "IPv4", 00:22:32.782 "traddr": "10.0.0.1", 00:22:32.782 "trsvcid": "38970" 00:22:32.782 }, 00:22:32.782 "auth": { 00:22:32.782 "state": "completed", 00:22:32.782 "digest": "sha256", 00:22:32.782 "dhgroup": "ffdhe3072" 00:22:32.782 } 00:22:32.782 } 00:22:32.782 ]' 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.782 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.040 08:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:22:33.971 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.971 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.971 08:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.971 08:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.971 08:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.971 08:18:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:33.971 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:33.971 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.534 08:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.792 00:22:34.792 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:34.792 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:34.792 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.049 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.049 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.049 08:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.050 08:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.050 08:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.050 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.050 { 00:22:35.050 "cntlid": 19, 00:22:35.050 "qid": 0, 00:22:35.050 "state": "enabled", 00:22:35.050 "thread": "nvmf_tgt_poll_group_000", 00:22:35.050 "listen_address": { 00:22:35.050 "trtype": "TCP", 00:22:35.050 "adrfam": "IPv4", 00:22:35.050 "traddr": "10.0.0.2", 00:22:35.050 "trsvcid": "4420" 00:22:35.050 }, 00:22:35.050 "peer_address": { 00:22:35.050 "trtype": "TCP", 00:22:35.050 "adrfam": "IPv4", 00:22:35.050 "traddr": "10.0.0.1", 00:22:35.050 "trsvcid": "39006" 00:22:35.050 }, 00:22:35.050 "auth": { 00:22:35.050 "state": "completed", 00:22:35.050 "digest": "sha256", 00:22:35.050 "dhgroup": "ffdhe3072" 00:22:35.050 } 00:22:35.050 } 00:22:35.050 ]' 00:22:35.050 
08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.050 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:35.050 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:35.050 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:35.050 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:35.050 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.050 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.050 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.308 08:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:22:36.244 08:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.244 08:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.244 08:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.244 08:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.244 08:18:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.244 08:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:36.244 08:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:36.244 08:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:36.502 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.071 00:22:37.071 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:37.071 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.071 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:37.071 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.071 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.071 08:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.071 08:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.071 08:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.071 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:37.071 { 00:22:37.071 "cntlid": 21, 00:22:37.071 "qid": 0, 00:22:37.071 "state": "enabled", 00:22:37.071 "thread": "nvmf_tgt_poll_group_000", 00:22:37.071 "listen_address": { 00:22:37.071 "trtype": "TCP", 00:22:37.071 "adrfam": "IPv4", 00:22:37.071 "traddr": "10.0.0.2", 00:22:37.071 "trsvcid": "4420" 00:22:37.071 }, 00:22:37.071 "peer_address": { 00:22:37.071 "trtype": "TCP", 00:22:37.071 "adrfam": "IPv4", 00:22:37.071 "traddr": "10.0.0.1", 00:22:37.071 "trsvcid": "39036" 00:22:37.071 }, 00:22:37.071 "auth": { 00:22:37.071 "state": "completed", 00:22:37.071 "digest": 
"sha256", 00:22:37.071 "dhgroup": "ffdhe3072" 00:22:37.071 } 00:22:37.071 } 00:22:37.071 ]' 00:22:37.329 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.329 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:37.329 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.329 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:37.329 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:37.329 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.329 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.329 08:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.587 08:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:22:38.519 08:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.519 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.519 08:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.519 08:18:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.519 08:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.519 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:38.519 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:38.519 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.777 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:39.034 00:22:39.034 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:39.034 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:39.034 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.292 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.292 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.292 08:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.292 08:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.292 08:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.292 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:39.292 { 00:22:39.292 "cntlid": 23, 00:22:39.292 "qid": 0, 00:22:39.292 "state": "enabled", 00:22:39.292 "thread": "nvmf_tgt_poll_group_000", 00:22:39.292 "listen_address": { 00:22:39.292 "trtype": "TCP", 00:22:39.292 "adrfam": "IPv4", 00:22:39.292 "traddr": "10.0.0.2", 00:22:39.292 "trsvcid": "4420" 00:22:39.292 }, 00:22:39.292 "peer_address": { 00:22:39.292 "trtype": "TCP", 00:22:39.292 "adrfam": "IPv4", 00:22:39.292 "traddr": "10.0.0.1", 00:22:39.292 "trsvcid": "38160" 00:22:39.292 }, 00:22:39.292 "auth": 
{ 00:22:39.292 "state": "completed", 00:22:39.292 "digest": "sha256", 00:22:39.292 "dhgroup": "ffdhe3072" 00:22:39.292 } 00:22:39.292 } 00:22:39.292 ]' 00:22:39.292 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:39.292 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:39.292 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:39.549 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:39.549 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:39.549 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.549 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.549 08:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.808 08:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:22:40.741 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.741 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.741 08:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.741 08:18:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.741 08:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.741 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.741 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:40.741 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:40.741 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:40.998 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:22:40.998 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.998 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:40.998 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:40.998 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:40.999 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.999 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.999 08:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.999 08:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.999 08:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.999 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.999 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.256 00:22:41.256 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:41.256 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.256 08:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:41.512 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.512 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.512 08:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.512 08:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.512 08:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.512 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:41.512 { 00:22:41.512 "cntlid": 25, 00:22:41.512 "qid": 0, 00:22:41.512 "state": "enabled", 00:22:41.512 "thread": "nvmf_tgt_poll_group_000", 00:22:41.513 "listen_address": { 00:22:41.513 "trtype": "TCP", 00:22:41.513 "adrfam": "IPv4", 00:22:41.513 "traddr": "10.0.0.2", 00:22:41.513 "trsvcid": "4420" 00:22:41.513 }, 00:22:41.513 "peer_address": { 00:22:41.513 "trtype": "TCP", 
00:22:41.513 "adrfam": "IPv4", 00:22:41.513 "traddr": "10.0.0.1", 00:22:41.513 "trsvcid": "38192" 00:22:41.513 }, 00:22:41.513 "auth": { 00:22:41.513 "state": "completed", 00:22:41.513 "digest": "sha256", 00:22:41.513 "dhgroup": "ffdhe4096" 00:22:41.513 } 00:22:41.513 } 00:22:41.513 ]' 00:22:41.513 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:41.513 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:41.513 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:41.770 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:41.770 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:41.770 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.770 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.770 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.027 08:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:22:42.995 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.995 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.995 08:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.995 08:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.995 08:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.995 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.995 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:42.995 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:43.253 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:22:43.253 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:43.253 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:43.253 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:43.253 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:43.253 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.253 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.253 08:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.253 08:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.253 08:18:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.253 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.253 08:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.509 00:22:43.509 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.509 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:43.509 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.767 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.767 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.767 08:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.767 08:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.767 08:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.767 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.767 { 00:22:43.767 "cntlid": 27, 00:22:43.767 "qid": 0, 00:22:43.767 "state": "enabled", 00:22:43.767 "thread": "nvmf_tgt_poll_group_000", 00:22:43.767 "listen_address": { 00:22:43.767 "trtype": "TCP", 00:22:43.767 "adrfam": 
"IPv4", 00:22:43.767 "traddr": "10.0.0.2", 00:22:43.767 "trsvcid": "4420" 00:22:43.767 }, 00:22:43.767 "peer_address": { 00:22:43.767 "trtype": "TCP", 00:22:43.767 "adrfam": "IPv4", 00:22:43.767 "traddr": "10.0.0.1", 00:22:43.767 "trsvcid": "38228" 00:22:43.767 }, 00:22:43.767 "auth": { 00:22:43.767 "state": "completed", 00:22:43.767 "digest": "sha256", 00:22:43.767 "dhgroup": "ffdhe4096" 00:22:43.767 } 00:22:43.767 } 00:22:43.767 ]' 00:22:43.767 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:44.024 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:44.024 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:44.024 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:44.024 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:44.024 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.024 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.024 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.282 08:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:22:45.216 08:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:45.216 08:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.216 08:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.216 08:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.216 08:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.216 08:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:45.217 08:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:45.217 08:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:45.482 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:22:45.482 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.482 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:45.482 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:45.482 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:45.482 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.482 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.482 08:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.482 08:18:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.482 08:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.482 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.482 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.046 00:22:46.046 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:46.046 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:46.046 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.303 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.303 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.303 08:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.303 08:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.303 08:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.303 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:46.303 { 00:22:46.303 "cntlid": 29, 00:22:46.303 "qid": 0, 00:22:46.303 "state": "enabled", 00:22:46.303 "thread": 
"nvmf_tgt_poll_group_000", 00:22:46.303 "listen_address": { 00:22:46.303 "trtype": "TCP", 00:22:46.303 "adrfam": "IPv4", 00:22:46.304 "traddr": "10.0.0.2", 00:22:46.304 "trsvcid": "4420" 00:22:46.304 }, 00:22:46.304 "peer_address": { 00:22:46.304 "trtype": "TCP", 00:22:46.304 "adrfam": "IPv4", 00:22:46.304 "traddr": "10.0.0.1", 00:22:46.304 "trsvcid": "38256" 00:22:46.304 }, 00:22:46.304 "auth": { 00:22:46.304 "state": "completed", 00:22:46.304 "digest": "sha256", 00:22:46.304 "dhgroup": "ffdhe4096" 00:22:46.304 } 00:22:46.304 } 00:22:46.304 ]' 00:22:46.304 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:46.304 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:46.304 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:46.304 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:46.304 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:46.304 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.304 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.304 08:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.562 08:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:22:47.495 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.495 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.495 08:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.495 08:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.495 08:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.495 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:47.495 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:47.495 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.753 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:48.318 00:22:48.318 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:48.318 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.318 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:48.574 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.574 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.575 08:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.575 08:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.575 08:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.575 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:48.575 { 00:22:48.575 "cntlid": 31, 00:22:48.575 "qid": 0, 00:22:48.575 "state": "enabled", 00:22:48.575 "thread": 
"nvmf_tgt_poll_group_000", 00:22:48.575 "listen_address": { 00:22:48.575 "trtype": "TCP", 00:22:48.575 "adrfam": "IPv4", 00:22:48.575 "traddr": "10.0.0.2", 00:22:48.575 "trsvcid": "4420" 00:22:48.575 }, 00:22:48.575 "peer_address": { 00:22:48.575 "trtype": "TCP", 00:22:48.575 "adrfam": "IPv4", 00:22:48.575 "traddr": "10.0.0.1", 00:22:48.575 "trsvcid": "34866" 00:22:48.575 }, 00:22:48.575 "auth": { 00:22:48.575 "state": "completed", 00:22:48.575 "digest": "sha256", 00:22:48.575 "dhgroup": "ffdhe4096" 00:22:48.575 } 00:22:48.575 } 00:22:48.575 ]' 00:22:48.575 08:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:48.575 08:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:48.575 08:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:48.575 08:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:48.575 08:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:48.575 08:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.575 08:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.575 08:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.830 08:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:22:49.764 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.764 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.764 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.764 08:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.764 08:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.764 08:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.764 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:49.764 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:49.764 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:49.764 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.020 08:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.583 00:22:50.583 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:50.583 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:50.583 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.840 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.840 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.840 08:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.840 08:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.840 08:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.840 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:22:50.840 { 00:22:50.840 "cntlid": 33, 00:22:50.840 "qid": 0, 00:22:50.840 "state": "enabled", 00:22:50.840 "thread": "nvmf_tgt_poll_group_000", 00:22:50.840 "listen_address": { 00:22:50.840 "trtype": "TCP", 00:22:50.840 "adrfam": "IPv4", 00:22:50.840 "traddr": "10.0.0.2", 00:22:50.840 "trsvcid": "4420" 00:22:50.840 }, 00:22:50.840 "peer_address": { 00:22:50.840 "trtype": "TCP", 00:22:50.840 "adrfam": "IPv4", 00:22:50.840 "traddr": "10.0.0.1", 00:22:50.840 "trsvcid": "34880" 00:22:50.840 }, 00:22:50.840 "auth": { 00:22:50.840 "state": "completed", 00:22:50.840 "digest": "sha256", 00:22:50.840 "dhgroup": "ffdhe6144" 00:22:50.840 } 00:22:50.840 } 00:22:50.840 ]' 00:22:50.840 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:50.840 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:50.840 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:51.098 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:51.098 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:51.098 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.098 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.098 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.355 08:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret 
DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:22:52.286 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.286 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:52.286 08:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.286 08:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.286 08:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.286 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:52.287 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:52.287 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.544 08:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.123 00:22:53.123 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:53.123 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:53.123 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.380 08:19:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:53.380 { 00:22:53.380 "cntlid": 35, 00:22:53.380 "qid": 0, 00:22:53.380 "state": "enabled", 00:22:53.380 "thread": "nvmf_tgt_poll_group_000", 00:22:53.380 "listen_address": { 00:22:53.380 "trtype": "TCP", 00:22:53.380 "adrfam": "IPv4", 00:22:53.380 "traddr": "10.0.0.2", 00:22:53.380 "trsvcid": "4420" 00:22:53.380 }, 00:22:53.380 "peer_address": { 00:22:53.380 "trtype": "TCP", 00:22:53.380 "adrfam": "IPv4", 00:22:53.380 "traddr": "10.0.0.1", 00:22:53.380 "trsvcid": "34908" 00:22:53.380 }, 00:22:53.380 "auth": { 00:22:53.380 "state": "completed", 00:22:53.380 "digest": "sha256", 00:22:53.380 "dhgroup": "ffdhe6144" 00:22:53.380 } 00:22:53.380 } 00:22:53.380 ]' 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.380 08:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.638 08:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:22:54.572 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.572 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.572 08:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.572 08:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.572 08:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.572 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:54.572 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:54.572 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.829 08:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.399 00:22:55.657 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:55.657 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.657 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.657 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.657 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.657 08:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.657 08:19:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.657 08:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.657 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:55.657 { 00:22:55.657 "cntlid": 37, 00:22:55.657 "qid": 0, 00:22:55.657 "state": "enabled", 00:22:55.657 "thread": "nvmf_tgt_poll_group_000", 00:22:55.657 "listen_address": { 00:22:55.657 "trtype": "TCP", 00:22:55.657 "adrfam": "IPv4", 00:22:55.657 "traddr": "10.0.0.2", 00:22:55.657 "trsvcid": "4420" 00:22:55.657 }, 00:22:55.657 "peer_address": { 00:22:55.657 "trtype": "TCP", 00:22:55.657 "adrfam": "IPv4", 00:22:55.657 "traddr": "10.0.0.1", 00:22:55.657 "trsvcid": "34926" 00:22:55.657 }, 00:22:55.657 "auth": { 00:22:55.657 "state": "completed", 00:22:55.657 "digest": "sha256", 00:22:55.657 "dhgroup": "ffdhe6144" 00:22:55.657 } 00:22:55.657 } 00:22:55.657 ]' 00:22:55.657 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:55.915 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:55.915 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:55.915 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:55.915 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:55.915 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.915 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.915 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.173 08:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:22:57.135 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.135 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:57.135 08:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.135 08:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.135 08:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.135 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.135 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:57.135 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:57.393 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:22:57.393 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.393 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:57.393 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:57.393 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:57.393 08:19:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.393 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:57.393 08:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.393 08:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.393 08:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.393 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:57.393 08:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:57.959 00:22:57.959 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:57.959 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.959 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.217 08:19:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:58.217 { 00:22:58.217 "cntlid": 39, 00:22:58.217 "qid": 0, 00:22:58.217 "state": "enabled", 00:22:58.217 "thread": "nvmf_tgt_poll_group_000", 00:22:58.217 "listen_address": { 00:22:58.217 "trtype": "TCP", 00:22:58.217 "adrfam": "IPv4", 00:22:58.217 "traddr": "10.0.0.2", 00:22:58.217 "trsvcid": "4420" 00:22:58.217 }, 00:22:58.217 "peer_address": { 00:22:58.217 "trtype": "TCP", 00:22:58.217 "adrfam": "IPv4", 00:22:58.217 "traddr": "10.0.0.1", 00:22:58.217 "trsvcid": "51240" 00:22:58.217 }, 00:22:58.217 "auth": { 00:22:58.217 "state": "completed", 00:22:58.217 "digest": "sha256", 00:22:58.217 "dhgroup": "ffdhe6144" 00:22:58.217 } 00:22:58.217 } 00:22:58.217 ]' 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.217 08:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.476 08:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:22:59.412 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.412 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:59.412 08:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.412 08:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.412 08:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.412 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:59.412 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:59.412 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:59.412 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.670 08:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.603 00:23:00.603 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:00.603 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:00.603 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:01.170 { 00:23:01.170 "cntlid": 41, 00:23:01.170 "qid": 0, 00:23:01.170 "state": "enabled", 00:23:01.170 "thread": "nvmf_tgt_poll_group_000", 00:23:01.170 "listen_address": { 00:23:01.170 "trtype": "TCP", 00:23:01.170 "adrfam": "IPv4", 00:23:01.170 "traddr": "10.0.0.2", 00:23:01.170 "trsvcid": "4420" 00:23:01.170 }, 00:23:01.170 "peer_address": { 00:23:01.170 "trtype": "TCP", 00:23:01.170 "adrfam": "IPv4", 00:23:01.170 "traddr": "10.0.0.1", 00:23:01.170 "trsvcid": "51250" 00:23:01.170 }, 00:23:01.170 "auth": { 00:23:01.170 "state": "completed", 00:23:01.170 "digest": "sha256", 00:23:01.170 "dhgroup": "ffdhe8192" 00:23:01.170 } 00:23:01.170 } 00:23:01.170 ]' 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.170 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:23:01.427 08:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:23:02.358 08:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.358 08:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.358 08:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.358 08:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.358 08:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.358 08:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:02.358 08:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:02.358 08:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.615 08:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.549 00:23:03.549 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:03.549 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.549 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:03.806 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:23:03.806 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.806 08:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.806 08:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.806 08:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.806 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:03.806 { 00:23:03.806 "cntlid": 43, 00:23:03.806 "qid": 0, 00:23:03.806 "state": "enabled", 00:23:03.806 "thread": "nvmf_tgt_poll_group_000", 00:23:03.806 "listen_address": { 00:23:03.806 "trtype": "TCP", 00:23:03.806 "adrfam": "IPv4", 00:23:03.806 "traddr": "10.0.0.2", 00:23:03.806 "trsvcid": "4420" 00:23:03.806 }, 00:23:03.806 "peer_address": { 00:23:03.806 "trtype": "TCP", 00:23:03.806 "adrfam": "IPv4", 00:23:03.806 "traddr": "10.0.0.1", 00:23:03.806 "trsvcid": "51274" 00:23:03.806 }, 00:23:03.806 "auth": { 00:23:03.806 "state": "completed", 00:23:03.806 "digest": "sha256", 00:23:03.806 "dhgroup": "ffdhe8192" 00:23:03.806 } 00:23:03.806 } 00:23:03.806 ]' 00:23:03.806 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:03.806 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:03.806 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:03.806 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:03.806 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:04.062 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.062 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.062 08:19:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.319 08:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:23:05.258 08:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.258 08:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:05.258 08:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.258 08:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.258 08:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.258 08:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:05.258 08:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:05.258 08:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.514 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.446 00:23:06.446 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:06.446 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:06.446 08:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:06.703 { 00:23:06.703 "cntlid": 45, 00:23:06.703 "qid": 0, 00:23:06.703 "state": "enabled", 00:23:06.703 "thread": "nvmf_tgt_poll_group_000", 00:23:06.703 "listen_address": { 00:23:06.703 "trtype": "TCP", 00:23:06.703 "adrfam": "IPv4", 00:23:06.703 "traddr": "10.0.0.2", 00:23:06.703 "trsvcid": "4420" 00:23:06.703 }, 00:23:06.703 "peer_address": { 00:23:06.703 "trtype": "TCP", 00:23:06.703 "adrfam": "IPv4", 00:23:06.703 "traddr": "10.0.0.1", 00:23:06.703 "trsvcid": "51298" 00:23:06.703 }, 00:23:06.703 "auth": { 00:23:06.703 "state": "completed", 00:23:06.703 "digest": "sha256", 00:23:06.703 "dhgroup": "ffdhe8192" 00:23:06.703 } 00:23:06.703 } 00:23:06.703 ]' 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:23:06.703 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.960 08:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:23:08.332 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.332 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:23:08.333 08:19:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:08.333 08:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:09.265 00:23:09.265 08:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:09.265 08:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:09.265 08:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:23:09.522 08:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.522 08:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.522 08:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.522 08:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.522 08:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.522 08:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:09.522 { 00:23:09.522 "cntlid": 47, 00:23:09.522 "qid": 0, 00:23:09.522 "state": "enabled", 00:23:09.522 "thread": "nvmf_tgt_poll_group_000", 00:23:09.522 "listen_address": { 00:23:09.522 "trtype": "TCP", 00:23:09.522 "adrfam": "IPv4", 00:23:09.522 "traddr": "10.0.0.2", 00:23:09.522 "trsvcid": "4420" 00:23:09.522 }, 00:23:09.523 "peer_address": { 00:23:09.523 "trtype": "TCP", 00:23:09.523 "adrfam": "IPv4", 00:23:09.523 "traddr": "10.0.0.1", 00:23:09.523 "trsvcid": "54120" 00:23:09.523 }, 00:23:09.523 "auth": { 00:23:09.523 "state": "completed", 00:23:09.523 "digest": "sha256", 00:23:09.523 "dhgroup": "ffdhe8192" 00:23:09.523 } 00:23:09.523 } 00:23:09.523 ]' 00:23:09.523 08:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:09.523 08:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:09.523 08:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:09.523 08:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:09.523 08:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:09.523 08:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.523 08:19:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.523 08:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.090 08:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:23:11.026 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.026 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:11.026 08:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.026 08:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.026 08:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.026 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:23:11.026 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:11.026 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:11.026 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:11.026 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.316 08:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.574 00:23:11.574 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:11.574 08:19:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:11.574 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:11.832 { 00:23:11.832 "cntlid": 49, 00:23:11.832 "qid": 0, 00:23:11.832 "state": "enabled", 00:23:11.832 "thread": "nvmf_tgt_poll_group_000", 00:23:11.832 "listen_address": { 00:23:11.832 "trtype": "TCP", 00:23:11.832 "adrfam": "IPv4", 00:23:11.832 "traddr": "10.0.0.2", 00:23:11.832 "trsvcid": "4420" 00:23:11.832 }, 00:23:11.832 "peer_address": { 00:23:11.832 "trtype": "TCP", 00:23:11.832 "adrfam": "IPv4", 00:23:11.832 "traddr": "10.0.0.1", 00:23:11.832 "trsvcid": "54150" 00:23:11.832 }, 00:23:11.832 "auth": { 00:23:11.832 "state": "completed", 00:23:11.832 "digest": "sha384", 00:23:11.832 "dhgroup": "null" 00:23:11.832 } 00:23:11.832 } 00:23:11.832 ]' 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.832 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.092 08:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.497 08:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.754 00:23:13.754 
08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:13.754 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:13.754 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.010 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.010 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.010 08:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.010 08:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.010 08:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.010 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:14.010 { 00:23:14.010 "cntlid": 51, 00:23:14.010 "qid": 0, 00:23:14.010 "state": "enabled", 00:23:14.010 "thread": "nvmf_tgt_poll_group_000", 00:23:14.010 "listen_address": { 00:23:14.010 "trtype": "TCP", 00:23:14.010 "adrfam": "IPv4", 00:23:14.010 "traddr": "10.0.0.2", 00:23:14.010 "trsvcid": "4420" 00:23:14.010 }, 00:23:14.010 "peer_address": { 00:23:14.010 "trtype": "TCP", 00:23:14.010 "adrfam": "IPv4", 00:23:14.010 "traddr": "10.0.0.1", 00:23:14.010 "trsvcid": "54170" 00:23:14.010 }, 00:23:14.010 "auth": { 00:23:14.010 "state": "completed", 00:23:14.010 "digest": "sha384", 00:23:14.010 "dhgroup": "null" 00:23:14.010 } 00:23:14.010 } 00:23:14.010 ]' 00:23:14.010 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:14.010 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:14.010 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:14.010 08:19:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:14.010 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:14.267 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.267 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.267 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.525 08:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:23:15.460 08:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.460 08:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:15.460 08:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.460 08:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.460 08:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.460 08:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:15.460 08:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:15.460 08:19:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.717 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:23:15.975 00:23:15.975 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.975 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.975 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:16.232 { 00:23:16.232 "cntlid": 53, 00:23:16.232 "qid": 0, 00:23:16.232 "state": "enabled", 00:23:16.232 "thread": "nvmf_tgt_poll_group_000", 00:23:16.232 "listen_address": { 00:23:16.232 "trtype": "TCP", 00:23:16.232 "adrfam": "IPv4", 00:23:16.232 "traddr": "10.0.0.2", 00:23:16.232 "trsvcid": "4420" 00:23:16.232 }, 00:23:16.232 "peer_address": { 00:23:16.232 "trtype": "TCP", 00:23:16.232 "adrfam": "IPv4", 00:23:16.232 "traddr": "10.0.0.1", 00:23:16.232 "trsvcid": "54206" 00:23:16.232 }, 00:23:16.232 "auth": { 00:23:16.232 "state": "completed", 00:23:16.232 "digest": "sha384", 00:23:16.232 "dhgroup": "null" 00:23:16.232 } 00:23:16.232 } 00:23:16.232 ]' 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.232 08:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.491 08:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:23:17.427 08:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.427 08:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:17.427 08:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.427 08:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.427 08:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.427 08:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:17.427 08:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:23:17.427 08:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.684 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:23:17.942 00:23:17.942 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:17.942 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.942 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:18.199 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.199 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.199 08:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.199 08:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.199 08:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.199 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:18.199 { 00:23:18.199 "cntlid": 55, 00:23:18.199 "qid": 0, 00:23:18.199 "state": "enabled", 00:23:18.199 "thread": "nvmf_tgt_poll_group_000", 00:23:18.199 "listen_address": { 00:23:18.199 "trtype": "TCP", 00:23:18.199 "adrfam": "IPv4", 00:23:18.199 "traddr": "10.0.0.2", 00:23:18.199 "trsvcid": "4420" 00:23:18.199 }, 00:23:18.199 "peer_address": { 00:23:18.199 "trtype": "TCP", 00:23:18.199 "adrfam": "IPv4", 00:23:18.199 "traddr": "10.0.0.1", 00:23:18.199 "trsvcid": "57166" 00:23:18.199 }, 00:23:18.199 "auth": { 00:23:18.199 "state": "completed", 00:23:18.199 "digest": "sha384", 00:23:18.199 "dhgroup": "null" 00:23:18.199 } 00:23:18.199 } 00:23:18.199 ]' 00:23:18.199 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:18.457 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:18.457 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:18.457 
08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:18.457 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:18.457 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.457 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.457 08:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.715 08:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:23:19.649 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.649 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.649 08:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.649 08:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.649 08:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.649 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.649 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:19.649 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:19.649 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.907 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.165 00:23:20.165 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:20.165 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:20.165 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.422 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.423 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.423 08:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.423 08:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.423 08:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.423 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:20.423 { 00:23:20.423 "cntlid": 57, 00:23:20.423 "qid": 0, 00:23:20.423 "state": "enabled", 00:23:20.423 "thread": "nvmf_tgt_poll_group_000", 00:23:20.423 "listen_address": { 00:23:20.423 "trtype": "TCP", 00:23:20.423 "adrfam": "IPv4", 00:23:20.423 "traddr": "10.0.0.2", 00:23:20.423 "trsvcid": "4420" 00:23:20.423 }, 00:23:20.423 "peer_address": { 00:23:20.423 "trtype": "TCP", 00:23:20.423 "adrfam": "IPv4", 00:23:20.423 "traddr": "10.0.0.1", 00:23:20.423 "trsvcid": "57190" 00:23:20.423 }, 00:23:20.423 "auth": { 00:23:20.423 "state": "completed", 00:23:20.423 "digest": "sha384", 00:23:20.423 "dhgroup": "ffdhe2048" 00:23:20.423 } 00:23:20.423 } 00:23:20.423 ]' 00:23:20.423 08:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:20.423 08:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:23:20.423 08:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:20.680 08:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:20.680 08:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:20.680 08:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.680 08:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.680 08:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.936 08:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:23:21.871 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.871 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:21.871 08:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.871 08:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.871 08:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.871 08:19:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:21.871 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:21.871 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.128 08:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:22.386 
00:23:22.645 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:23:22.645 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:23:22.645 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:22.645 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:22.645 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:22.645 08:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:22.645 08:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:22.903 08:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:22.903 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:23:22.903 {
00:23:22.903 "cntlid": 59,
00:23:22.903 "qid": 0,
00:23:22.903 "state": "enabled",
00:23:22.903 "thread": "nvmf_tgt_poll_group_000",
00:23:22.903 "listen_address": {
00:23:22.903 "trtype": "TCP",
00:23:22.903 "adrfam": "IPv4",
00:23:22.903 "traddr": "10.0.0.2",
00:23:22.903 "trsvcid": "4420"
00:23:22.903 },
00:23:22.903 "peer_address": {
00:23:22.903 "trtype": "TCP",
00:23:22.903 "adrfam": "IPv4",
00:23:22.903 "traddr": "10.0.0.1",
00:23:22.903 "trsvcid": "57216"
00:23:22.903 },
00:23:22.903 "auth": {
00:23:22.903 "state": "completed",
00:23:22.903 "digest": "sha384",
00:23:22.903 "dhgroup": "ffdhe2048"
00:23:22.903 }
00:23:22.903 }
00:23:22.903 ]'
00:23:22.903 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:23:22.903 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:23:22.903 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:23:22.903 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:23:22.903 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:23:22.903 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:22.903 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:22.903 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:23.160 08:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==:
00:23:24.093 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:24.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:24.093 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:24.093 08:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.093 08:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:24.093 08:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.093 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:23:24.093 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:23:24.093 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:24.350 08:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:24.916 
00:23:24.916 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:23:24.916 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:23:24.916 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:24.916 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:24.916 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:24.916 08:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:24.916 08:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:24.916 08:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:24.916 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:23:24.916 {
00:23:24.916 "cntlid": 61,
00:23:24.916 "qid": 0,
00:23:24.916 "state": "enabled",
00:23:24.916 "thread": "nvmf_tgt_poll_group_000",
00:23:24.916 "listen_address": {
00:23:24.916 "trtype": "TCP",
00:23:24.916 "adrfam": "IPv4",
00:23:24.916 "traddr": "10.0.0.2",
00:23:24.916 "trsvcid": "4420"
00:23:24.916 },
00:23:24.916 "peer_address": {
00:23:24.916 "trtype": "TCP",
00:23:24.916 "adrfam": "IPv4",
00:23:24.916 "traddr": "10.0.0.1",
00:23:24.916 "trsvcid": "57238"
00:23:24.916 },
00:23:24.916 "auth": {
00:23:24.916 "state": "completed",
00:23:24.916 "digest": "sha384",
00:23:24.916 "dhgroup": "ffdhe2048"
00:23:24.916 }
00:23:24.916 }
00:23:24.916 ]'
00:23:24.916 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:23:25.201 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:23:25.201 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:23:25.201 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:23:25.201 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:23:25.201 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:25.201 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:25.201 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:25.457 08:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb:
00:23:26.395 08:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:26.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:26.395 08:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:26.395 08:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:26.395 08:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:26.395 08:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:26.395 08:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:23:26.395 08:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:23:26.395 08:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:23:26.653 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:23:26.910 
00:23:26.910 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:23:26.910 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:26.910 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:23:27.167 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:27.167 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:27.167 08:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:27.167 08:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:27.167 08:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:27.167 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:23:27.167 {
00:23:27.167 "cntlid": 63,
00:23:27.167 "qid": 0,
00:23:27.167 "state": "enabled",
00:23:27.167 "thread": "nvmf_tgt_poll_group_000",
00:23:27.167 "listen_address": {
00:23:27.167 "trtype": "TCP",
00:23:27.167 "adrfam": "IPv4",
00:23:27.167 "traddr": "10.0.0.2",
00:23:27.167 "trsvcid": "4420"
00:23:27.167 },
00:23:27.167 "peer_address": {
00:23:27.167 "trtype": "TCP",
00:23:27.167 "adrfam": "IPv4",
00:23:27.167 "traddr": "10.0.0.1",
00:23:27.167 "trsvcid": "57264"
00:23:27.167 },
00:23:27.167 "auth": {
00:23:27.167 "state": "completed",
00:23:27.167 "digest": "sha384",
00:23:27.167 "dhgroup": "ffdhe2048"
00:23:27.167 }
00:23:27.167 }
00:23:27.167 ]'
00:23:27.167 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:23:27.425 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:23:27.425 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:23:27.425 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:23:27.425 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:23:27.425 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:27.425 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:27.425 08:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:27.683 08:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=:
00:23:28.623 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:28.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:28.623 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:28.623 08:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:28.623 08:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:28.623 08:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:28.623 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:23:28.623 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:23:28.623 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:28.623 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:28.881 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:29.138 
00:23:29.138 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:23:29.138 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:23:29.138 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:29.396 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:29.396 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:29.396 08:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:29.396 08:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:29.396 08:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:29.396 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:23:29.396 {
00:23:29.396 "cntlid": 65,
00:23:29.396 "qid": 0,
00:23:29.396 "state": "enabled",
00:23:29.396 "thread": "nvmf_tgt_poll_group_000",
00:23:29.396 "listen_address": {
00:23:29.396 "trtype": "TCP",
00:23:29.396 "adrfam": "IPv4",
00:23:29.396 "traddr": "10.0.0.2",
00:23:29.396 "trsvcid": "4420"
00:23:29.396 },
00:23:29.396 "peer_address": {
00:23:29.396 "trtype": "TCP",
00:23:29.396 "adrfam": "IPv4",
00:23:29.396 "traddr": "10.0.0.1",
00:23:29.396 "trsvcid": "47816"
00:23:29.396 },
00:23:29.396 "auth": {
00:23:29.396 "state": "completed",
00:23:29.396 "digest": "sha384",
00:23:29.396 "dhgroup": "ffdhe3072"
00:23:29.396 }
00:23:29.396 }
00:23:29.396 ]'
00:23:29.396 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:23:29.396 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:23:29.396 08:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:23:29.396 08:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:23:29.396 08:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:23:29.656 08:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:29.656 08:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:29.656 08:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:29.914 08:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=:
00:23:30.863 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:30.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:30.863 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:30.863 08:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:30.863 08:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:30.863 08:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:30.863 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:23:30.863 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:30.863 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:31.120 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1
00:23:31.120 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:23:31.121 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:23:31.121 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:23:31.121 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:23:31.121 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:31.121 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:31.121 08:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:31.121 08:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:31.121 08:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:31.121 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:31.121 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:31.378 
00:23:31.378 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:23:31.378 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:23:31.378 08:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:31.635 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:31.635 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:31.635 08:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:31.635 08:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:31.635 08:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:31.635 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:23:31.635 {
00:23:31.635 "cntlid": 67,
00:23:31.635 "qid": 0,
00:23:31.635 "state": "enabled",
00:23:31.635 "thread": "nvmf_tgt_poll_group_000",
00:23:31.635 "listen_address": {
00:23:31.635 "trtype": "TCP",
00:23:31.635 "adrfam": "IPv4",
00:23:31.635 "traddr": "10.0.0.2",
00:23:31.635 "trsvcid": "4420"
00:23:31.635 },
00:23:31.635 "peer_address": {
00:23:31.635 "trtype": "TCP",
00:23:31.635 "adrfam": "IPv4",
00:23:31.635 "traddr": "10.0.0.1",
00:23:31.635 "trsvcid": "47834"
00:23:31.635 },
00:23:31.635 "auth": {
00:23:31.635 "state": "completed",
00:23:31.635 "digest": "sha384",
00:23:31.635 "dhgroup": "ffdhe3072"
00:23:31.635 }
00:23:31.635 }
00:23:31.635 ]'
00:23:31.635 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:23:31.635 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:23:31.635 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:23:31.635 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:23:31.635 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:23:31.892 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:31.892 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:31.892 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:31.892 08:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==:
00:23:32.832 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:32.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:32.832 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:32.832 08:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:32.832 08:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:32.832 08:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:32.832 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:23:32.832 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:32.832 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:33.091 08:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:33.656 
00:23:33.656 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:23:33.656 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:23:33.656 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:23:33.912 {
00:23:33.912 "cntlid": 69,
00:23:33.912 "qid": 0,
00:23:33.912 "state": "enabled",
00:23:33.912 "thread": "nvmf_tgt_poll_group_000",
00:23:33.912 "listen_address": {
00:23:33.912 "trtype": "TCP",
00:23:33.912 "adrfam": "IPv4",
00:23:33.912 "traddr": "10.0.0.2",
00:23:33.912 "trsvcid": "4420"
00:23:33.912 },
00:23:33.912 "peer_address": {
00:23:33.912 "trtype": "TCP",
00:23:33.912 "adrfam": "IPv4",
00:23:33.912 "traddr": "10.0.0.1",
00:23:33.912 "trsvcid": "47874"
00:23:33.912 },
00:23:33.912 "auth": {
00:23:33.912 "state": "completed",
00:23:33.912 "digest": "sha384",
00:23:33.912 "dhgroup": "ffdhe3072"
00:23:33.912 }
00:23:33.912 }
00:23:33.912 ]'
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:33.912 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:34.169 08:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb:
00:23:35.100 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:35.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:35.100 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:35.100 08:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:35.100 08:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:35.100 08:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:35.100 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:23:35.100 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:35.100 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:23:35.358 08:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:23:35.922 
00:23:35.922 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:23:35.922 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:23:35.922 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:35.922 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:35.922 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:35.922 08:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:35.922 08:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:35.922 08:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:35.922 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:23:35.922 {
00:23:35.922 "cntlid": 71,
00:23:35.922 "qid": 0,
00:23:35.922 "state": "enabled",
00:23:35.922 "thread":
"nvmf_tgt_poll_group_000", 00:23:35.922 "listen_address": { 00:23:35.922 "trtype": "TCP", 00:23:35.922 "adrfam": "IPv4", 00:23:35.922 "traddr": "10.0.0.2", 00:23:35.922 "trsvcid": "4420" 00:23:35.922 }, 00:23:35.922 "peer_address": { 00:23:35.922 "trtype": "TCP", 00:23:35.922 "adrfam": "IPv4", 00:23:35.922 "traddr": "10.0.0.1", 00:23:35.922 "trsvcid": "47904" 00:23:35.922 }, 00:23:35.922 "auth": { 00:23:35.922 "state": "completed", 00:23:35.922 "digest": "sha384", 00:23:35.922 "dhgroup": "ffdhe3072" 00:23:35.922 } 00:23:35.922 } 00:23:35.922 ]' 00:23:35.922 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:36.179 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:36.179 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:36.179 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:36.179 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:36.179 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.179 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.179 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.435 08:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:23:37.371 08:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.371 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.371 08:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:37.371 08:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.371 08:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.371 08:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.371 08:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.371 08:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:37.371 08:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.371 08:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.629 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.887 00:23:38.156 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:38.156 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:38.156 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.156 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.156 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.156 08:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.156 08:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.156 08:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.156 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:23:38.156 { 00:23:38.156 "cntlid": 73, 00:23:38.156 "qid": 0, 00:23:38.156 "state": "enabled", 00:23:38.156 "thread": "nvmf_tgt_poll_group_000", 00:23:38.156 "listen_address": { 00:23:38.156 "trtype": "TCP", 00:23:38.156 "adrfam": "IPv4", 00:23:38.156 "traddr": "10.0.0.2", 00:23:38.156 "trsvcid": "4420" 00:23:38.156 }, 00:23:38.156 "peer_address": { 00:23:38.156 "trtype": "TCP", 00:23:38.156 "adrfam": "IPv4", 00:23:38.156 "traddr": "10.0.0.1", 00:23:38.156 "trsvcid": "59548" 00:23:38.156 }, 00:23:38.156 "auth": { 00:23:38.156 "state": "completed", 00:23:38.156 "digest": "sha384", 00:23:38.156 "dhgroup": "ffdhe4096" 00:23:38.156 } 00:23:38.156 } 00:23:38.156 ]' 00:23:38.413 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:38.413 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:38.413 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:38.413 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:38.413 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:38.413 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.413 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.413 08:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.670 08:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret 
DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:23:39.649 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.649 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:39.649 08:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.649 08:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.649 08:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.649 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:39.649 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.649 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.906 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.163 00:23:40.164 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:40.164 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:40.164 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.421 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.421 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.421 08:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.421 08:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.421 08:19:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.421 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:40.421 { 00:23:40.421 "cntlid": 75, 00:23:40.421 "qid": 0, 00:23:40.421 "state": "enabled", 00:23:40.421 "thread": "nvmf_tgt_poll_group_000", 00:23:40.421 "listen_address": { 00:23:40.421 "trtype": "TCP", 00:23:40.421 "adrfam": "IPv4", 00:23:40.421 "traddr": "10.0.0.2", 00:23:40.421 "trsvcid": "4420" 00:23:40.421 }, 00:23:40.421 "peer_address": { 00:23:40.421 "trtype": "TCP", 00:23:40.421 "adrfam": "IPv4", 00:23:40.421 "traddr": "10.0.0.1", 00:23:40.421 "trsvcid": "59576" 00:23:40.421 }, 00:23:40.421 "auth": { 00:23:40.421 "state": "completed", 00:23:40.421 "digest": "sha384", 00:23:40.421 "dhgroup": "ffdhe4096" 00:23:40.421 } 00:23:40.421 } 00:23:40.421 ]' 00:23:40.421 08:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:40.421 08:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:40.421 08:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:40.421 08:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:40.421 08:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:40.679 08:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:40.679 08:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:40.679 08:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.937 08:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:23:41.867 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:41.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:41.867 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:41.867 08:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.867 08:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.867 08:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.867 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:41.867 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:41.867 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.125 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.383 00:23:42.383 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:42.383 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.383 08:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:42.641 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.641 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.641 08:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.641 08:19:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.641 08:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.641 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:42.641 { 00:23:42.641 "cntlid": 77, 00:23:42.641 "qid": 0, 00:23:42.641 "state": "enabled", 00:23:42.641 "thread": "nvmf_tgt_poll_group_000", 00:23:42.641 "listen_address": { 00:23:42.641 "trtype": "TCP", 00:23:42.641 "adrfam": "IPv4", 00:23:42.641 "traddr": "10.0.0.2", 00:23:42.641 "trsvcid": "4420" 00:23:42.641 }, 00:23:42.641 "peer_address": { 00:23:42.641 "trtype": "TCP", 00:23:42.641 "adrfam": "IPv4", 00:23:42.641 "traddr": "10.0.0.1", 00:23:42.641 "trsvcid": "59614" 00:23:42.641 }, 00:23:42.641 "auth": { 00:23:42.641 "state": "completed", 00:23:42.641 "digest": "sha384", 00:23:42.641 "dhgroup": "ffdhe4096" 00:23:42.641 } 00:23:42.641 } 00:23:42.641 ]' 00:23:42.641 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:42.641 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:42.641 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:42.897 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:42.898 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:42.898 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.898 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.898 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.154 08:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:23:44.087 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:44.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:44.087 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:44.087 08:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.087 08:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.087 08:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.087 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:44.087 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:44.087 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:44.344 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:23:44.344 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:44.344 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:44.344 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:44.344 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:44.344 08:19:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.344 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:44.344 08:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.344 08:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.344 08:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.344 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:44.344 08:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:44.910 00:23:44.910 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:44.910 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.910 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:44.910 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.910 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.910 08:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.910 08:19:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.167 08:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.167 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:45.167 { 00:23:45.167 "cntlid": 79, 00:23:45.167 "qid": 0, 00:23:45.167 "state": "enabled", 00:23:45.167 "thread": "nvmf_tgt_poll_group_000", 00:23:45.167 "listen_address": { 00:23:45.167 "trtype": "TCP", 00:23:45.167 "adrfam": "IPv4", 00:23:45.167 "traddr": "10.0.0.2", 00:23:45.167 "trsvcid": "4420" 00:23:45.167 }, 00:23:45.167 "peer_address": { 00:23:45.167 "trtype": "TCP", 00:23:45.167 "adrfam": "IPv4", 00:23:45.167 "traddr": "10.0.0.1", 00:23:45.167 "trsvcid": "59636" 00:23:45.167 }, 00:23:45.167 "auth": { 00:23:45.167 "state": "completed", 00:23:45.167 "digest": "sha384", 00:23:45.167 "dhgroup": "ffdhe4096" 00:23:45.167 } 00:23:45.167 } 00:23:45.167 ]' 00:23:45.167 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:45.167 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:45.167 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:45.167 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:45.167 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:45.167 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.167 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.167 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:45.423 08:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:23:46.355 08:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:46.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:46.355 08:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:46.355 08:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.355 08:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.355 08:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.355 08:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.355 08:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:46.355 08:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:46.355 08:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.611 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.208 00:23:47.208 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:47.208 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:47.208 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.464 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.464 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:23:47.464 08:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.464 08:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.464 08:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.464 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:47.464 { 00:23:47.464 "cntlid": 81, 00:23:47.464 "qid": 0, 00:23:47.464 "state": "enabled", 00:23:47.464 "thread": "nvmf_tgt_poll_group_000", 00:23:47.464 "listen_address": { 00:23:47.464 "trtype": "TCP", 00:23:47.464 "adrfam": "IPv4", 00:23:47.464 "traddr": "10.0.0.2", 00:23:47.464 "trsvcid": "4420" 00:23:47.464 }, 00:23:47.464 "peer_address": { 00:23:47.464 "trtype": "TCP", 00:23:47.464 "adrfam": "IPv4", 00:23:47.464 "traddr": "10.0.0.1", 00:23:47.464 "trsvcid": "59648" 00:23:47.464 }, 00:23:47.464 "auth": { 00:23:47.464 "state": "completed", 00:23:47.464 "digest": "sha384", 00:23:47.464 "dhgroup": "ffdhe6144" 00:23:47.464 } 00:23:47.464 } 00:23:47.464 ]' 00:23:47.464 08:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:47.464 08:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:47.464 08:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:47.464 08:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:47.464 08:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:47.722 08:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:47.722 08:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:47.722 08:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:23:47.722 08:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:23:49.096 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.096 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.097 08:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.664 00:23:49.664 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:49.664 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:49.664 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:49.921 { 00:23:49.921 "cntlid": 83, 00:23:49.921 "qid": 0, 00:23:49.921 "state": "enabled", 00:23:49.921 "thread": "nvmf_tgt_poll_group_000", 00:23:49.921 "listen_address": { 00:23:49.921 "trtype": "TCP", 00:23:49.921 "adrfam": "IPv4", 00:23:49.921 "traddr": "10.0.0.2", 00:23:49.921 "trsvcid": "4420" 00:23:49.921 }, 00:23:49.921 "peer_address": { 00:23:49.921 "trtype": "TCP", 00:23:49.921 "adrfam": "IPv4", 00:23:49.921 "traddr": "10.0.0.1", 00:23:49.921 "trsvcid": "40494" 00:23:49.921 }, 00:23:49.921 "auth": { 00:23:49.921 "state": "completed", 00:23:49.921 "digest": "sha384", 00:23:49.921 "dhgroup": "ffdhe6144" 00:23:49.921 } 00:23:49.921 } 00:23:49.921 ]' 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.921 08:19:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.486 08:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:23:51.431 08:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.431 08:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:51.431 08:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.431 08:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.431 08:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.431 08:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:51.431 08:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:51.431 08:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.688 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.253 00:23:52.253 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:52.253 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:52.253 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:23:52.512 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.512 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.512 08:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.512 08:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.512 08:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.512 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:52.512 { 00:23:52.512 "cntlid": 85, 00:23:52.512 "qid": 0, 00:23:52.512 "state": "enabled", 00:23:52.512 "thread": "nvmf_tgt_poll_group_000", 00:23:52.512 "listen_address": { 00:23:52.512 "trtype": "TCP", 00:23:52.512 "adrfam": "IPv4", 00:23:52.512 "traddr": "10.0.0.2", 00:23:52.512 "trsvcid": "4420" 00:23:52.512 }, 00:23:52.512 "peer_address": { 00:23:52.512 "trtype": "TCP", 00:23:52.512 "adrfam": "IPv4", 00:23:52.512 "traddr": "10.0.0.1", 00:23:52.512 "trsvcid": "40518" 00:23:52.512 }, 00:23:52.512 "auth": { 00:23:52.512 "state": "completed", 00:23:52.512 "digest": "sha384", 00:23:52.512 "dhgroup": "ffdhe6144" 00:23:52.512 } 00:23:52.512 } 00:23:52.512 ]' 00:23:52.512 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:52.512 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:52.512 08:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:52.512 08:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:52.512 08:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:52.512 08:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.512 08:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:23:52.512 08:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.769 08:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:23:53.737 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.737 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:53.737 08:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.737 08:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.737 08:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.737 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:53.737 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:53.737 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:53.993 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:23:53.993 08:20:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:53.993 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:53.993 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:53.993 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:53.993 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:53.993 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:53.993 08:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.993 08:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.993 08:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.993 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:53.993 08:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:54.559 00:23:54.559 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:54.559 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:54.559 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:23:54.816 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.816 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.816 08:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.816 08:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.816 08:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.816 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:54.816 { 00:23:54.816 "cntlid": 87, 00:23:54.816 "qid": 0, 00:23:54.816 "state": "enabled", 00:23:54.816 "thread": "nvmf_tgt_poll_group_000", 00:23:54.816 "listen_address": { 00:23:54.816 "trtype": "TCP", 00:23:54.816 "adrfam": "IPv4", 00:23:54.816 "traddr": "10.0.0.2", 00:23:54.816 "trsvcid": "4420" 00:23:54.816 }, 00:23:54.816 "peer_address": { 00:23:54.816 "trtype": "TCP", 00:23:54.816 "adrfam": "IPv4", 00:23:54.816 "traddr": "10.0.0.1", 00:23:54.816 "trsvcid": "40544" 00:23:54.816 }, 00:23:54.816 "auth": { 00:23:54.816 "state": "completed", 00:23:54.816 "digest": "sha384", 00:23:54.816 "dhgroup": "ffdhe6144" 00:23:54.816 } 00:23:54.816 } 00:23:54.816 ]' 00:23:54.816 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:54.816 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:54.816 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:54.816 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:54.816 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:55.084 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:55.085 08:20:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:55.085 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:55.343 08:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:23:56.281 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:56.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:56.281 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:56.281 08:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.281 08:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.281 08:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.281 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:56.281 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:56.281 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:56.281 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.540 08:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.477 00:23:57.477 08:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:57.477 08:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:57.477 08:20:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:57.735 { 00:23:57.735 "cntlid": 89, 00:23:57.735 "qid": 0, 00:23:57.735 "state": "enabled", 00:23:57.735 "thread": "nvmf_tgt_poll_group_000", 00:23:57.735 "listen_address": { 00:23:57.735 "trtype": "TCP", 00:23:57.735 "adrfam": "IPv4", 00:23:57.735 "traddr": "10.0.0.2", 00:23:57.735 "trsvcid": "4420" 00:23:57.735 }, 00:23:57.735 "peer_address": { 00:23:57.735 "trtype": "TCP", 00:23:57.735 "adrfam": "IPv4", 00:23:57.735 "traddr": "10.0.0.1", 00:23:57.735 "trsvcid": "40578" 00:23:57.735 }, 00:23:57.735 "auth": { 00:23:57.735 "state": "completed", 00:23:57.735 "digest": "sha384", 00:23:57.735 "dhgroup": "ffdhe8192" 00:23:57.735 } 00:23:57.735 } 00:23:57.735 ]' 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:57.735 08:20:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:57.735 08:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:57.992 08:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:23:58.925 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:58.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:58.925 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.925 08:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.925 08:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.925 08:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.925 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:58.925 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:58.925 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.181 08:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.110 00:24:00.110 08:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:24:00.110 08:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.110 08:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:00.365 08:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.365 08:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:00.365 08:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.365 08:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.365 08:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.365 08:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:00.365 { 00:24:00.365 "cntlid": 91, 00:24:00.365 "qid": 0, 00:24:00.365 "state": "enabled", 00:24:00.365 "thread": "nvmf_tgt_poll_group_000", 00:24:00.365 "listen_address": { 00:24:00.365 "trtype": "TCP", 00:24:00.365 "adrfam": "IPv4", 00:24:00.365 "traddr": "10.0.0.2", 00:24:00.365 "trsvcid": "4420" 00:24:00.365 }, 00:24:00.365 "peer_address": { 00:24:00.365 "trtype": "TCP", 00:24:00.365 "adrfam": "IPv4", 00:24:00.365 "traddr": "10.0.0.1", 00:24:00.365 "trsvcid": "41052" 00:24:00.365 }, 00:24:00.365 "auth": { 00:24:00.365 "state": "completed", 00:24:00.365 "digest": "sha384", 00:24:00.365 "dhgroup": "ffdhe8192" 00:24:00.365 } 00:24:00.365 } 00:24:00.365 ]' 00:24:00.365 08:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:00.622 08:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:00.622 08:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:00.622 08:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:24:00.622 08:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:00.622 08:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:00.622 08:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.622 08:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.878 08:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:24:01.812 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:01.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:01.812 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:01.812 08:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.812 08:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.812 08:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.812 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:01.812 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:01.812 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.070 08:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:03.007 
00:24:03.007 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:03.007 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.007 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:03.263 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.263 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:03.264 08:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.264 08:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.264 08:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.264 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:03.264 { 00:24:03.264 "cntlid": 93, 00:24:03.264 "qid": 0, 00:24:03.264 "state": "enabled", 00:24:03.264 "thread": "nvmf_tgt_poll_group_000", 00:24:03.264 "listen_address": { 00:24:03.264 "trtype": "TCP", 00:24:03.264 "adrfam": "IPv4", 00:24:03.264 "traddr": "10.0.0.2", 00:24:03.264 "trsvcid": "4420" 00:24:03.264 }, 00:24:03.264 "peer_address": { 00:24:03.264 "trtype": "TCP", 00:24:03.264 "adrfam": "IPv4", 00:24:03.264 "traddr": "10.0.0.1", 00:24:03.264 "trsvcid": "41062" 00:24:03.264 }, 00:24:03.264 "auth": { 00:24:03.264 "state": "completed", 00:24:03.264 "digest": "sha384", 00:24:03.264 "dhgroup": "ffdhe8192" 00:24:03.264 } 00:24:03.264 } 00:24:03.264 ]' 00:24:03.264 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:03.264 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:03.264 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:03.264 08:20:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:03.264 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:03.264 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:03.264 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.264 08:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:03.522 08:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:24:04.456 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:04.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:04.456 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.456 08:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.456 08:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
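The repeated connect_authenticate cycles in this run hinge on the bash expansion `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})`, which appends the controller-key flag only when a bidirectional key is configured (key2 gets `--dhchap-ctrlr-key ckey2` above; key3 does not). A minimal Python sketch of that argument-building logic, with hypothetical names, for illustration only:

```python
def build_host_args(keyid: int, ckeys: dict) -> list:
    """Mimic auth.sh's ckey expansion: emit --dhchap-ctrlr-key
    only when a controller (bidirectional) key exists for keyid."""
    args = ["--dhchap-key", f"key{keyid}"]
    if ckeys.get(keyid):  # ${ckeys[$3]:+...}: unset/empty means unidirectional auth
        args += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return args

# key2 has a controller secret configured, key3 does not (as in the log)
print(build_host_args(2, {2: "DHHC-1:01:..."}))
# → ['--dhchap-key', 'key2', '--dhchap-ctrlr-key', 'ckey2']
print(build_host_args(3, {}))
# → ['--dhchap-key', 'key3']
```

The same list is then passed to both `nvmf_subsystem_add_host` on the target and `bdev_nvme_attach_controller` on the host side, which is why the two rpc.py invocations in each cycle carry identical key flags.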
00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.714 08:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.971 08:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.971 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:04.971 08:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:05.904 
00:24:05.904 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:05.904 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:05.904 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:05.904 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.904 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:05.904 08:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.904 08:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.904 08:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.904 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:05.904 { 00:24:05.904 "cntlid": 95, 00:24:05.904 "qid": 0, 00:24:05.904 "state": "enabled", 00:24:05.904 "thread": "nvmf_tgt_poll_group_000", 00:24:05.904 "listen_address": { 00:24:05.904 "trtype": "TCP", 00:24:05.904 "adrfam": "IPv4", 00:24:05.904 "traddr": "10.0.0.2", 00:24:05.904 "trsvcid": "4420" 00:24:05.904 }, 00:24:05.904 "peer_address": { 00:24:05.904 "trtype": "TCP", 00:24:05.904 "adrfam": "IPv4", 00:24:05.904 "traddr": "10.0.0.1", 00:24:05.904 "trsvcid": "41096" 00:24:05.905 }, 00:24:05.905 "auth": { 00:24:05.905 "state": "completed", 00:24:05.905 "digest": "sha384", 00:24:05.905 "dhgroup": "ffdhe8192" 00:24:05.905 } 00:24:05.905 } 00:24:05.905 ]' 00:24:05.905 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:05.905 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:05.905 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:06.162 08:20:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:06.162 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:06.162 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.162 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.162 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:06.420 08:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:24:07.417 08:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:07.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:07.417 08:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:07.417 08:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.417 08:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.417 08:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.417 08:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:24:07.417 08:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:07.417 08:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
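The `--dhchap-secret` strings passed to `nvme connect` above follow the DH-HMAC-CHAP secret representation `DHHC-1:<hh>:<base64>:`, where the two-digit field names the hash used to transform the key (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the key material plus a trailing CRC. A small sketch that splits one of the secrets from this run into those fields (the CRC interpretation is an assumption, not verified here):

```python
import base64

HASHES = {"00": "none", "01": "SHA-256", "02": "SHA-384", "03": "SHA-512"}

def parse_dhchap_secret(secret: str):
    """Split DHHC-1:<hash>:<base64>: into its fields.
    Returns the transform hash name and the decoded payload length
    (key bytes plus, presumably, a 4-byte CRC)."""
    prefix, hash_id, b64, _trailing = secret.split(":")
    assert prefix == "DHHC-1"
    payload = base64.b64decode(b64)
    return HASHES[hash_id], len(payload)

# the key3 secret used in the connect above: 64-byte key class, SHA-512 transform
print(parse_dhchap_secret(
    "DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3"
    "ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=:"))
# → ('SHA-512', 68)
```

This explains why secrets of different `key<N>` slots in the log differ in length: the base64 payload scales with the configured key size, while the `DHHC-1:NN:` framing stays fixed.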
00:24:07.417 08:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:07.417 08:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:07.417 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:07.981 00:24:07.981 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:07.981 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:07.981 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:07.981 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.981 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:07.981 08:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.981 08:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.981 08:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.981 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:07.981 { 00:24:07.981 "cntlid": 97, 00:24:07.981 "qid": 0, 00:24:07.981 "state": "enabled", 00:24:07.981 "thread": "nvmf_tgt_poll_group_000", 00:24:07.981 "listen_address": { 00:24:07.981 "trtype": "TCP", 00:24:07.981 "adrfam": "IPv4", 00:24:07.981 "traddr": "10.0.0.2", 00:24:07.981 "trsvcid": "4420" 00:24:07.981 }, 00:24:07.981 "peer_address": { 00:24:07.981 "trtype": "TCP", 00:24:07.981 "adrfam": "IPv4", 00:24:07.981 "traddr": "10.0.0.1", 00:24:07.981 "trsvcid": "34438" 00:24:07.981 }, 00:24:07.981 "auth": { 00:24:07.981 "state": "completed", 00:24:07.981 "digest": "sha512", 00:24:07.981 "dhgroup": "null" 00:24:07.981 } 00:24:07.981 } 00:24:07.981 ]' 00:24:07.981 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:24:08.238 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:08.239 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:08.239 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:08.239 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:08.239 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.239 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.239 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:08.497 08:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:24:09.430 08:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:09.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:09.430 08:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:09.430 08:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.430 08:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.430 08:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
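After each attach, the script dumps `nvmf_subsystem_get_qpairs` and asserts on `.[0].auth.digest`, `.[0].auth.dhgroup`, and `.[0].auth.state` with jq, as in the `[[ sha512 == \s\h\a\5\1\2 ]]`-style checks above. The same verification can be sketched in Python against the JSON shape shown in this log (field names taken from the qpairs output above):

```python
import json

def check_qpair_auth(qpairs_json: str, digest: str, dhgroup: str) -> bool:
    """Replicate auth.sh's jq assertions: the first qpair must have
    completed DH-HMAC-CHAP with the expected digest and DH group."""
    auth = json.loads(qpairs_json)[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

# trimmed-down qpair record matching the structure dumped in the log
qpairs = """[{"cntlid": 97, "qid": 0, "state": "enabled",
  "auth": {"state": "completed", "digest": "sha512", "dhgroup": "null"}}]"""
print(check_qpair_auth(qpairs, "sha512", "null"))  # → True
```

Note that `dhgroup` is the literal string `"null"` when no FFDHE group is negotiated, which is why the log's comparison is `[[ null == \n\u\l\l ]]` rather than a test for an absent field.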
00:24:09.430 08:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:09.430 08:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:09.430 08:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.688 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.946 00:24:09.946 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:09.946 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.946 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:10.203 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.203 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:10.203 08:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.204 08:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.204 08:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.204 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:10.204 { 00:24:10.204 "cntlid": 99, 00:24:10.204 "qid": 0, 00:24:10.204 "state": "enabled", 00:24:10.204 "thread": "nvmf_tgt_poll_group_000", 00:24:10.204 "listen_address": { 00:24:10.204 "trtype": "TCP", 00:24:10.204 "adrfam": "IPv4", 00:24:10.204 "traddr": "10.0.0.2", 00:24:10.204 "trsvcid": "4420" 00:24:10.204 }, 00:24:10.204 "peer_address": { 00:24:10.204 "trtype": "TCP", 00:24:10.204 "adrfam": "IPv4", 00:24:10.204 "traddr": "10.0.0.1", 00:24:10.204 "trsvcid": "34466" 00:24:10.204 }, 00:24:10.204 "auth": { 00:24:10.204 "state": "completed", 00:24:10.204 "digest": "sha512", 00:24:10.204 "dhgroup": "null" 00:24:10.204 } 00:24:10.204 } 00:24:10.204 ]' 00:24:10.204 
08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:10.204 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:10.204 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:10.204 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:10.204 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:10.461 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:10.461 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:10.461 08:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:10.719 08:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:24:11.652 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:11.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:11.652 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:11.652 08:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.652 08:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.652 08:20:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.652 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:11.652 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:11.652 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:11.909 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:24:11.909 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:11.909 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:11.909 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:11.909 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:11.909 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:11.909 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.909 08:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.909 08:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.909 08:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.909 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.909 08:20:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.166 00:24:12.166 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:12.166 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:12.166 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:12.423 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.423 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:12.423 08:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.423 08:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.423 08:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.423 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:12.423 { 00:24:12.423 "cntlid": 101, 00:24:12.423 "qid": 0, 00:24:12.423 "state": "enabled", 00:24:12.423 "thread": "nvmf_tgt_poll_group_000", 00:24:12.423 "listen_address": { 00:24:12.423 "trtype": "TCP", 00:24:12.423 "adrfam": "IPv4", 00:24:12.423 "traddr": "10.0.0.2", 00:24:12.423 "trsvcid": "4420" 00:24:12.423 }, 00:24:12.423 "peer_address": { 00:24:12.423 "trtype": "TCP", 00:24:12.423 "adrfam": "IPv4", 00:24:12.423 "traddr": "10.0.0.1", 00:24:12.423 "trsvcid": "34494" 00:24:12.423 }, 00:24:12.423 "auth": { 00:24:12.423 "state": "completed", 00:24:12.423 "digest": "sha512", 00:24:12.423 "dhgroup": "null" 
00:24:12.423 } 00:24:12.423 } 00:24:12.423 ]' 00:24:12.423 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:12.423 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:12.423 08:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:12.423 08:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:12.423 08:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:12.423 08:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:12.423 08:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:12.423 08:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.682 08:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:24:13.617 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:13.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:13.617 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:13.617 08:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.617 08:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:24:13.617 08:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.617 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:13.617 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:13.617 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:14.184 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:24:14.184 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:14.184 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:14.184 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:14.184 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:14.184 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:14.184 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:24:14.184 08:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.184 08:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.184 08:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.184 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:14.184 08:20:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:14.442 00:24:14.442 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:14.442 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:14.442 08:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:14.699 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.699 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:14.700 { 00:24:14.700 "cntlid": 103, 00:24:14.700 "qid": 0, 00:24:14.700 "state": "enabled", 00:24:14.700 "thread": "nvmf_tgt_poll_group_000", 00:24:14.700 "listen_address": { 00:24:14.700 "trtype": "TCP", 00:24:14.700 "adrfam": "IPv4", 00:24:14.700 "traddr": "10.0.0.2", 00:24:14.700 "trsvcid": "4420" 00:24:14.700 }, 00:24:14.700 "peer_address": { 00:24:14.700 "trtype": "TCP", 00:24:14.700 "adrfam": "IPv4", 00:24:14.700 "traddr": "10.0.0.1", 00:24:14.700 "trsvcid": "34522" 00:24:14.700 }, 00:24:14.700 "auth": { 00:24:14.700 "state": "completed", 00:24:14.700 "digest": "sha512", 00:24:14.700 "dhgroup": "null" 00:24:14.700 } 00:24:14.700 } 
00:24:14.700 ]' 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:14.700 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.958 08:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:24:15.895 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:15.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:15.895 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:15.895 08:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.895 08:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.895 08:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:24:15.895 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.895 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:15.895 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:15.895 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.152 08:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.409 00:24:16.666 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:16.666 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:16.666 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:16.924 { 00:24:16.924 "cntlid": 105, 00:24:16.924 "qid": 0, 00:24:16.924 "state": "enabled", 00:24:16.924 "thread": "nvmf_tgt_poll_group_000", 00:24:16.924 "listen_address": { 00:24:16.924 "trtype": "TCP", 00:24:16.924 "adrfam": "IPv4", 00:24:16.924 "traddr": "10.0.0.2", 00:24:16.924 "trsvcid": "4420" 00:24:16.924 }, 00:24:16.924 "peer_address": { 00:24:16.924 "trtype": "TCP", 00:24:16.924 "adrfam": "IPv4", 00:24:16.924 "traddr": "10.0.0.1", 00:24:16.924 "trsvcid": "34542" 00:24:16.924 }, 00:24:16.924 "auth": { 00:24:16.924 
"state": "completed", 00:24:16.924 "digest": "sha512", 00:24:16.924 "dhgroup": "ffdhe2048" 00:24:16.924 } 00:24:16.924 } 00:24:16.924 ]' 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:16.924 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:17.180 08:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:24:18.113 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.113 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:18.113 08:20:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.113 08:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.113 08:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.113 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:18.113 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:18.113 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.370 08:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.628 00:24:18.887 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:18.887 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:18.887 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:19.145 { 00:24:19.145 "cntlid": 107, 00:24:19.145 "qid": 0, 00:24:19.145 "state": "enabled", 00:24:19.145 "thread": "nvmf_tgt_poll_group_000", 00:24:19.145 "listen_address": { 00:24:19.145 "trtype": "TCP", 00:24:19.145 "adrfam": "IPv4", 00:24:19.145 "traddr": "10.0.0.2", 00:24:19.145 "trsvcid": "4420" 00:24:19.145 }, 00:24:19.145 "peer_address": { 00:24:19.145 "trtype": "TCP", 
00:24:19.145 "adrfam": "IPv4", 00:24:19.145 "traddr": "10.0.0.1", 00:24:19.145 "trsvcid": "38370" 00:24:19.145 }, 00:24:19.145 "auth": { 00:24:19.145 "state": "completed", 00:24:19.145 "digest": "sha512", 00:24:19.145 "dhgroup": "ffdhe2048" 00:24:19.145 } 00:24:19.145 } 00:24:19.145 ]' 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:19.145 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:19.402 08:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:24:20.337 08:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.337 08:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:20.337 08:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.337 08:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.337 08:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.337 08:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:20.337 08:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:20.337 08:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.595 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.856 00:24:21.146 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:21.146 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:21.146 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:21.406 { 00:24:21.406 "cntlid": 109, 00:24:21.406 "qid": 0, 00:24:21.406 "state": "enabled", 00:24:21.406 "thread": "nvmf_tgt_poll_group_000", 00:24:21.406 "listen_address": { 00:24:21.406 "trtype": "TCP", 00:24:21.406 "adrfam": "IPv4", 00:24:21.406 "traddr": "10.0.0.2", 00:24:21.406 "trsvcid": "4420" 
00:24:21.406 }, 00:24:21.406 "peer_address": { 00:24:21.406 "trtype": "TCP", 00:24:21.406 "adrfam": "IPv4", 00:24:21.406 "traddr": "10.0.0.1", 00:24:21.406 "trsvcid": "38388" 00:24:21.406 }, 00:24:21.406 "auth": { 00:24:21.406 "state": "completed", 00:24:21.406 "digest": "sha512", 00:24:21.406 "dhgroup": "ffdhe2048" 00:24:21.406 } 00:24:21.406 } 00:24:21.406 ]' 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:21.406 08:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:21.663 08:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:24:22.598 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:22.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:22.598 08:20:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:22.598 08:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.598 08:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.598 08:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.598 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:22.598 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.598 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.855 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:24:22.855 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:22.855 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:22.855 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:22.855 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:22.855 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:22.855 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:24:22.855 08:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.855 08:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.855 08:20:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.855 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:22.855 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:23.112 00:24:23.370 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:23.370 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:23.370 08:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:23.627 { 00:24:23.627 "cntlid": 111, 00:24:23.627 "qid": 0, 00:24:23.627 "state": "enabled", 00:24:23.627 "thread": "nvmf_tgt_poll_group_000", 00:24:23.627 "listen_address": { 00:24:23.627 "trtype": "TCP", 00:24:23.627 "adrfam": "IPv4", 00:24:23.627 "traddr": "10.0.0.2", 
00:24:23.627 "trsvcid": "4420" 00:24:23.627 }, 00:24:23.627 "peer_address": { 00:24:23.627 "trtype": "TCP", 00:24:23.627 "adrfam": "IPv4", 00:24:23.627 "traddr": "10.0.0.1", 00:24:23.627 "trsvcid": "38420" 00:24:23.627 }, 00:24:23.627 "auth": { 00:24:23.627 "state": "completed", 00:24:23.627 "digest": "sha512", 00:24:23.627 "dhgroup": "ffdhe2048" 00:24:23.627 } 00:24:23.627 } 00:24:23.627 ]' 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:23.627 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:23.886 08:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:24:24.823 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:24.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:24.823 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:24.823 08:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.823 08:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.823 08:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.823 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.823 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:24.823 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:24.823 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:25.081 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:24:25.081 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:25.081 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:25.081 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:25.081 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:25.081 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:25.081 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.081 08:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.081 08:20:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.081 08:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.081 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.081 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.338 00:24:25.338 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:25.338 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:25.338 08:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:25.596 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.596 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:25.596 08:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.596 08:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.596 08:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.596 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:25.596 { 00:24:25.597 "cntlid": 113, 00:24:25.597 "qid": 0, 00:24:25.597 "state": "enabled", 00:24:25.597 "thread": 
"nvmf_tgt_poll_group_000", 00:24:25.597 "listen_address": { 00:24:25.597 "trtype": "TCP", 00:24:25.597 "adrfam": "IPv4", 00:24:25.597 "traddr": "10.0.0.2", 00:24:25.597 "trsvcid": "4420" 00:24:25.597 }, 00:24:25.597 "peer_address": { 00:24:25.597 "trtype": "TCP", 00:24:25.597 "adrfam": "IPv4", 00:24:25.597 "traddr": "10.0.0.1", 00:24:25.597 "trsvcid": "38448" 00:24:25.597 }, 00:24:25.597 "auth": { 00:24:25.597 "state": "completed", 00:24:25.597 "digest": "sha512", 00:24:25.597 "dhgroup": "ffdhe3072" 00:24:25.597 } 00:24:25.597 } 00:24:25.597 ]' 00:24:25.597 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:25.597 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:25.597 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:25.854 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:25.854 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:25.854 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:25.854 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:25.854 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:26.115 08:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:24:27.096 08:20:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:27.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:27.096 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:27.096 08:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.096 08:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.096 08:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.096 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:27.096 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:27.096 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.354 08:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.611 00:24:27.611 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:27.611 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:27.611 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:27.869 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.869 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:27.869 08:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.869 08:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.869 08:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.869 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:24:27.869 { 00:24:27.869 "cntlid": 115, 00:24:27.869 "qid": 0, 00:24:27.869 "state": "enabled", 00:24:27.869 "thread": "nvmf_tgt_poll_group_000", 00:24:27.869 "listen_address": { 00:24:27.869 "trtype": "TCP", 00:24:27.869 "adrfam": "IPv4", 00:24:27.869 "traddr": "10.0.0.2", 00:24:27.869 "trsvcid": "4420" 00:24:27.869 }, 00:24:27.869 "peer_address": { 00:24:27.869 "trtype": "TCP", 00:24:27.869 "adrfam": "IPv4", 00:24:27.869 "traddr": "10.0.0.1", 00:24:27.869 "trsvcid": "56196" 00:24:27.869 }, 00:24:27.869 "auth": { 00:24:27.869 "state": "completed", 00:24:27.869 "digest": "sha512", 00:24:27.869 "dhgroup": "ffdhe3072" 00:24:27.869 } 00:24:27.869 } 00:24:27.869 ]' 00:24:27.869 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:27.869 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:27.869 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:27.869 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:27.869 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:28.127 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:28.127 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:28.127 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:28.397 08:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret 
DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:24:29.333 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:29.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:29.333 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:29.333 08:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.333 08:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.333 08:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.333 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:29.333 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:29.333 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.592 08:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.850 00:24:29.850 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:29.850 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:29.850 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:30.108 { 00:24:30.108 "cntlid": 117, 00:24:30.108 "qid": 0, 00:24:30.108 "state": "enabled", 00:24:30.108 "thread": "nvmf_tgt_poll_group_000", 00:24:30.108 "listen_address": { 00:24:30.108 "trtype": "TCP", 00:24:30.108 "adrfam": "IPv4", 00:24:30.108 "traddr": "10.0.0.2", 00:24:30.108 "trsvcid": "4420" 00:24:30.108 }, 00:24:30.108 "peer_address": { 00:24:30.108 "trtype": "TCP", 00:24:30.108 "adrfam": "IPv4", 00:24:30.108 "traddr": "10.0.0.1", 00:24:30.108 "trsvcid": "56220" 00:24:30.108 }, 00:24:30.108 "auth": { 00:24:30.108 "state": "completed", 00:24:30.108 "digest": "sha512", 00:24:30.108 "dhgroup": "ffdhe3072" 00:24:30.108 } 00:24:30.108 } 00:24:30.108 ]' 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:30.108 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:30.367 08:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:24:31.302 08:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:31.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:31.302 08:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.302 08:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.302 08:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.302 08:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.302 08:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:31.302 08:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:31.302 08:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:31.561 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:24:31.561 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:31.561 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:31.561 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:31.561 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:31.561 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:31.561 08:20:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:24:31.561 08:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.561 08:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.561 08:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.561 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:31.561 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:32.127 00:24:32.127 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:32.127 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:32.127 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:32.127 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.127 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:32.127 08:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.127 08:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.127 08:20:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.127 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:32.127 { 00:24:32.127 "cntlid": 119, 00:24:32.127 "qid": 0, 00:24:32.127 "state": "enabled", 00:24:32.127 "thread": "nvmf_tgt_poll_group_000", 00:24:32.127 "listen_address": { 00:24:32.127 "trtype": "TCP", 00:24:32.127 "adrfam": "IPv4", 00:24:32.127 "traddr": "10.0.0.2", 00:24:32.127 "trsvcid": "4420" 00:24:32.127 }, 00:24:32.127 "peer_address": { 00:24:32.127 "trtype": "TCP", 00:24:32.127 "adrfam": "IPv4", 00:24:32.127 "traddr": "10.0.0.1", 00:24:32.127 "trsvcid": "56256" 00:24:32.127 }, 00:24:32.127 "auth": { 00:24:32.127 "state": "completed", 00:24:32.127 "digest": "sha512", 00:24:32.127 "dhgroup": "ffdhe3072" 00:24:32.127 } 00:24:32.127 } 00:24:32.127 ]' 00:24:32.127 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:32.385 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:32.385 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:32.385 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:32.385 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:32.385 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:32.385 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:32.386 08:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:32.650 08:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:24:33.583 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:33.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:33.583 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:33.583 08:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.583 08:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.583 08:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.583 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.583 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:33.583 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:33.583 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.839 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.096 00:24:34.096 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:34.096 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:34.096 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:34.353 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.353 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:34.353 08:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:34.353 08:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.353 08:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.353 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:34.353 { 00:24:34.353 "cntlid": 121, 00:24:34.353 "qid": 0, 00:24:34.353 "state": "enabled", 00:24:34.353 "thread": "nvmf_tgt_poll_group_000", 00:24:34.353 "listen_address": { 00:24:34.353 "trtype": "TCP", 00:24:34.353 "adrfam": "IPv4", 00:24:34.353 "traddr": "10.0.0.2", 00:24:34.353 "trsvcid": "4420" 00:24:34.353 }, 00:24:34.353 "peer_address": { 00:24:34.353 "trtype": "TCP", 00:24:34.353 "adrfam": "IPv4", 00:24:34.353 "traddr": "10.0.0.1", 00:24:34.353 "trsvcid": "56282" 00:24:34.353 }, 00:24:34.353 "auth": { 00:24:34.353 "state": "completed", 00:24:34.353 "digest": "sha512", 00:24:34.353 "dhgroup": "ffdhe4096" 00:24:34.353 } 00:24:34.353 } 00:24:34.353 ]' 00:24:34.353 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:34.353 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:34.353 08:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:34.614 08:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:34.614 08:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:34.614 08:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:34.614 08:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:34.614 08:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:34.917 08:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:24:35.849 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:35.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:35.849 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:35.849 08:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.849 08:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.849 08:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.849 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:35.849 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.849 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:36.106 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:24:36.106 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:36.106 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:36.106 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:36.106 08:20:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:36.106 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:36.106 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.106 08:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.106 08:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.106 08:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.106 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.106 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.363 00:24:36.363 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:36.363 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:36.363 08:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:36.620 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.620 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:36.620 08:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.620 08:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.620 08:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.620 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:36.620 { 00:24:36.620 "cntlid": 123, 00:24:36.620 "qid": 0, 00:24:36.620 "state": "enabled", 00:24:36.620 "thread": "nvmf_tgt_poll_group_000", 00:24:36.620 "listen_address": { 00:24:36.620 "trtype": "TCP", 00:24:36.620 "adrfam": "IPv4", 00:24:36.620 "traddr": "10.0.0.2", 00:24:36.620 "trsvcid": "4420" 00:24:36.620 }, 00:24:36.620 "peer_address": { 00:24:36.620 "trtype": "TCP", 00:24:36.620 "adrfam": "IPv4", 00:24:36.620 "traddr": "10.0.0.1", 00:24:36.620 "trsvcid": "56310" 00:24:36.620 }, 00:24:36.620 "auth": { 00:24:36.620 "state": "completed", 00:24:36.620 "digest": "sha512", 00:24:36.620 "dhgroup": "ffdhe4096" 00:24:36.620 } 00:24:36.620 } 00:24:36.620 ]' 00:24:36.620 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:36.878 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:36.878 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:36.878 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:36.878 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:36.878 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:36.878 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:36.878 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:37.136 08:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:24:38.070 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:38.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:38.070 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:38.070 08:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.070 08:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.070 08:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.070 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:38.070 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:38.070 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.328 08:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.585 00:24:38.585 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:38.585 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:38.585 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:38.843 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:24:38.843 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:38.843 08:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.843 08:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.843 08:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.843 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:38.843 { 00:24:38.843 "cntlid": 125, 00:24:38.843 "qid": 0, 00:24:38.843 "state": "enabled", 00:24:38.843 "thread": "nvmf_tgt_poll_group_000", 00:24:38.843 "listen_address": { 00:24:38.843 "trtype": "TCP", 00:24:38.843 "adrfam": "IPv4", 00:24:38.843 "traddr": "10.0.0.2", 00:24:38.843 "trsvcid": "4420" 00:24:38.843 }, 00:24:38.843 "peer_address": { 00:24:38.843 "trtype": "TCP", 00:24:38.843 "adrfam": "IPv4", 00:24:38.843 "traddr": "10.0.0.1", 00:24:38.843 "trsvcid": "60650" 00:24:38.843 }, 00:24:38.843 "auth": { 00:24:38.843 "state": "completed", 00:24:38.843 "digest": "sha512", 00:24:38.843 "dhgroup": "ffdhe4096" 00:24:38.843 } 00:24:38.843 } 00:24:38.843 ]' 00:24:38.843 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:38.843 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:38.843 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:39.101 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:39.101 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:39.101 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:39.101 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:39.101 08:20:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:39.358 08:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:24:40.293 08:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:40.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:40.293 08:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.293 08:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.293 08:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.293 08:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.293 08:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:40.293 08:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:40.293 08:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:40.550 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:40.806 00:24:40.807 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:40.807 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:40.807 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:41.065 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:24:41.065 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:41.065 08:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.065 08:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.322 08:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.322 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:41.322 { 00:24:41.322 "cntlid": 127, 00:24:41.322 "qid": 0, 00:24:41.322 "state": "enabled", 00:24:41.322 "thread": "nvmf_tgt_poll_group_000", 00:24:41.322 "listen_address": { 00:24:41.322 "trtype": "TCP", 00:24:41.322 "adrfam": "IPv4", 00:24:41.322 "traddr": "10.0.0.2", 00:24:41.322 "trsvcid": "4420" 00:24:41.322 }, 00:24:41.322 "peer_address": { 00:24:41.322 "trtype": "TCP", 00:24:41.322 "adrfam": "IPv4", 00:24:41.322 "traddr": "10.0.0.1", 00:24:41.323 "trsvcid": "60672" 00:24:41.323 }, 00:24:41.323 "auth": { 00:24:41.323 "state": "completed", 00:24:41.323 "digest": "sha512", 00:24:41.323 "dhgroup": "ffdhe4096" 00:24:41.323 } 00:24:41.323 } 00:24:41.323 ]' 00:24:41.323 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:41.323 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:41.323 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:41.323 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:41.323 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:41.323 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:41.323 08:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:41.323 08:20:50 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:41.579 08:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:24:42.515 08:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:42.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:42.515 08:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.515 08:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.515 08:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.515 08:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.515 08:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.515 08:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:42.515 08:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:42.515 08:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.774 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.340 00:24:43.340 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:43.340 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:43.340 08:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:24:43.599 08:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.599 08:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:43.599 08:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.599 08:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.599 08:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.599 08:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:43.599 { 00:24:43.599 "cntlid": 129, 00:24:43.599 "qid": 0, 00:24:43.599 "state": "enabled", 00:24:43.599 "thread": "nvmf_tgt_poll_group_000", 00:24:43.599 "listen_address": { 00:24:43.599 "trtype": "TCP", 00:24:43.599 "adrfam": "IPv4", 00:24:43.599 "traddr": "10.0.0.2", 00:24:43.599 "trsvcid": "4420" 00:24:43.599 }, 00:24:43.599 "peer_address": { 00:24:43.599 "trtype": "TCP", 00:24:43.599 "adrfam": "IPv4", 00:24:43.599 "traddr": "10.0.0.1", 00:24:43.599 "trsvcid": "60704" 00:24:43.599 }, 00:24:43.599 "auth": { 00:24:43.599 "state": "completed", 00:24:43.599 "digest": "sha512", 00:24:43.599 "dhgroup": "ffdhe6144" 00:24:43.599 } 00:24:43.599 } 00:24:43.599 ]' 00:24:43.599 08:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:43.599 08:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:43.599 08:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:43.599 08:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:43.599 08:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:43.857 08:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:43.857 08:20:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:43.857 08:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:44.115 08:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:24:45.049 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:45.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:45.049 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:45.049 08:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.049 08:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.049 08:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.049 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:45.049 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:45.049 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:45.307 08:20:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:24:45.307 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:45.307 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:45.307 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:45.307 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:45.307 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:45.307 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.307 08:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.307 08:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.307 08:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.307 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.307 08:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.872 00:24:45.872 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:45.872 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:45.872 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:46.129 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.129 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:46.129 08:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.129 08:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.129 08:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.129 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:46.129 { 00:24:46.129 "cntlid": 131, 00:24:46.129 "qid": 0, 00:24:46.129 "state": "enabled", 00:24:46.129 "thread": "nvmf_tgt_poll_group_000", 00:24:46.129 "listen_address": { 00:24:46.129 "trtype": "TCP", 00:24:46.129 "adrfam": "IPv4", 00:24:46.129 "traddr": "10.0.0.2", 00:24:46.129 "trsvcid": "4420" 00:24:46.129 }, 00:24:46.129 "peer_address": { 00:24:46.129 "trtype": "TCP", 00:24:46.129 "adrfam": "IPv4", 00:24:46.129 "traddr": "10.0.0.1", 00:24:46.129 "trsvcid": "60748" 00:24:46.129 }, 00:24:46.129 "auth": { 00:24:46.129 "state": "completed", 00:24:46.130 "digest": "sha512", 00:24:46.130 "dhgroup": "ffdhe6144" 00:24:46.130 } 00:24:46.130 } 00:24:46.130 ]' 00:24:46.130 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:46.130 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:46.130 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:46.130 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:46.130 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:24:46.130 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:46.130 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:46.130 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:46.386 08:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:24:47.315 08:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:47.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:47.315 08:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:47.315 08:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.315 08:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.315 08:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.315 08:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:47.315 08:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:47.315 08:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.572 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:48.135 00:24:48.135 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:24:48.135 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:48.135 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:48.410 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.410 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:48.410 08:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.410 08:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.410 08:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.410 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:48.410 { 00:24:48.410 "cntlid": 133, 00:24:48.410 "qid": 0, 00:24:48.410 "state": "enabled", 00:24:48.410 "thread": "nvmf_tgt_poll_group_000", 00:24:48.410 "listen_address": { 00:24:48.410 "trtype": "TCP", 00:24:48.410 "adrfam": "IPv4", 00:24:48.410 "traddr": "10.0.0.2", 00:24:48.410 "trsvcid": "4420" 00:24:48.410 }, 00:24:48.410 "peer_address": { 00:24:48.410 "trtype": "TCP", 00:24:48.410 "adrfam": "IPv4", 00:24:48.410 "traddr": "10.0.0.1", 00:24:48.410 "trsvcid": "50448" 00:24:48.410 }, 00:24:48.410 "auth": { 00:24:48.410 "state": "completed", 00:24:48.410 "digest": "sha512", 00:24:48.410 "dhgroup": "ffdhe6144" 00:24:48.410 } 00:24:48.410 } 00:24:48.410 ]' 00:24:48.410 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:48.410 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:48.410 08:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:48.410 08:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:48.410 08:20:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:48.679 08:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:48.679 08:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:48.679 08:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:48.937 08:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:24:49.871 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:49.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:49.871 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:49.871 08:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.871 08:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.871 08:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.871 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:49.871 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:49.871 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:50.129 08:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:50.695 00:24:50.695 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:24:50.695 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:50.695 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:50.953 { 00:24:50.953 "cntlid": 135, 00:24:50.953 "qid": 0, 00:24:50.953 "state": "enabled", 00:24:50.953 "thread": "nvmf_tgt_poll_group_000", 00:24:50.953 "listen_address": { 00:24:50.953 "trtype": "TCP", 00:24:50.953 "adrfam": "IPv4", 00:24:50.953 "traddr": "10.0.0.2", 00:24:50.953 "trsvcid": "4420" 00:24:50.953 }, 00:24:50.953 "peer_address": { 00:24:50.953 "trtype": "TCP", 00:24:50.953 "adrfam": "IPv4", 00:24:50.953 "traddr": "10.0.0.1", 00:24:50.953 "trsvcid": "50488" 00:24:50.953 }, 00:24:50.953 "auth": { 00:24:50.953 "state": "completed", 00:24:50.953 "digest": "sha512", 00:24:50.953 "dhgroup": "ffdhe6144" 00:24:50.953 } 00:24:50.953 } 00:24:50.953 ]' 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:50.953 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:51.211 08:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:24:52.144 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:52.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:52.145 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.145 08:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.145 08:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.145 08:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.145 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.145 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:52.145 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:52.145 08:21:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.410 08:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:53.343 00:24:53.343 08:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:53.343 08:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:53.343 08:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:53.600 { 00:24:53.600 "cntlid": 137, 00:24:53.600 "qid": 0, 00:24:53.600 "state": "enabled", 00:24:53.600 "thread": "nvmf_tgt_poll_group_000", 00:24:53.600 "listen_address": { 00:24:53.600 "trtype": "TCP", 00:24:53.600 "adrfam": "IPv4", 00:24:53.600 "traddr": "10.0.0.2", 00:24:53.600 "trsvcid": "4420" 00:24:53.600 }, 00:24:53.600 "peer_address": { 00:24:53.600 "trtype": "TCP", 00:24:53.600 "adrfam": "IPv4", 00:24:53.600 "traddr": "10.0.0.1", 00:24:53.600 "trsvcid": "50522" 00:24:53.600 }, 00:24:53.600 "auth": { 00:24:53.600 "state": "completed", 00:24:53.600 "digest": "sha512", 00:24:53.600 "dhgroup": "ffdhe8192" 00:24:53.600 } 00:24:53.600 } 00:24:53.600 ]' 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:53.600 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:53.857 08:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:55.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.227 08:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:56.159 00:24:56.159 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:56.159 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:56.159 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:56.416 { 00:24:56.416 "cntlid": 139, 00:24:56.416 "qid": 0, 00:24:56.416 "state": "enabled", 00:24:56.416 "thread": "nvmf_tgt_poll_group_000", 00:24:56.416 "listen_address": { 00:24:56.416 "trtype": "TCP", 00:24:56.416 "adrfam": "IPv4", 00:24:56.416 "traddr": "10.0.0.2", 00:24:56.416 "trsvcid": "4420" 00:24:56.416 }, 00:24:56.416 "peer_address": { 00:24:56.416 "trtype": "TCP", 00:24:56.416 "adrfam": "IPv4", 00:24:56.416 "traddr": "10.0.0.1", 00:24:56.416 "trsvcid": "50536" 00:24:56.416 }, 00:24:56.416 "auth": { 00:24:56.416 "state": "completed", 00:24:56.416 "digest": "sha512", 00:24:56.416 "dhgroup": "ffdhe8192" 00:24:56.416 } 00:24:56.416 } 00:24:56.416 ]' 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:56.416 08:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:56.673 08:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NTNlNGI1MGY0MzIwZWI4NzFkMmM5MTIyMWM3NTc0OWTRaZhE: --dhchap-ctrl-secret DHHC-1:02:YTRmODRhZWFjMTMzZjZmYzkzNzE0NjYzNDEwNDcwOTUzNWFhYTk2OWRhYzM3NmRkPv61eQ==: 00:24:57.604 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:57.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:57.604 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.604 08:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.604 08:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.604 08:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.604 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:24:57.604 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:57.604 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:57.861 08:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.793 00:24:58.793 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:58.793 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:58.793 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:59.052 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.052 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:59.052 08:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.052 08:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.052 08:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.052 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:59.052 { 00:24:59.052 "cntlid": 141, 00:24:59.052 "qid": 0, 00:24:59.052 "state": "enabled", 00:24:59.052 "thread": "nvmf_tgt_poll_group_000", 00:24:59.052 "listen_address": { 00:24:59.052 "trtype": "TCP", 00:24:59.052 "adrfam": "IPv4", 00:24:59.052 "traddr": "10.0.0.2", 00:24:59.052 "trsvcid": "4420" 00:24:59.052 }, 00:24:59.052 "peer_address": { 00:24:59.052 "trtype": "TCP", 00:24:59.052 "adrfam": "IPv4", 00:24:59.052 "traddr": "10.0.0.1", 00:24:59.052 "trsvcid": "58120" 00:24:59.052 }, 00:24:59.052 "auth": { 00:24:59.052 "state": "completed", 00:24:59.052 "digest": "sha512", 00:24:59.052 "dhgroup": "ffdhe8192" 00:24:59.052 } 00:24:59.052 } 00:24:59.052 ]' 00:24:59.052 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:24:59.052 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:59.052 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:59.309 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:59.309 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:59.309 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:59.309 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:59.309 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:59.567 08:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZmEzMTk2ODM1ZDRkOTIxODVlZjM3YmNkOWE3Y2M3MWFlYzVjNTJmNTI4ZjIyMTFlpIhydA==: --dhchap-ctrl-secret DHHC-1:01:ZDQ5YTc2MTM4ZTZiZGIwYWJjYWZhZDk0NTc0YWE0YzRQwPWb: 00:25:00.499 08:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:00.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:00.499 08:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:00.499 08:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.499 08:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.499 08:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.499 
08:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:00.499 08:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:00.499 08:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:00.756 08:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:01.688 00:25:01.688 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:01.688 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:01.688 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:01.688 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.688 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:01.688 08:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.689 08:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.945 08:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.945 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:01.945 { 00:25:01.945 "cntlid": 143, 00:25:01.945 "qid": 0, 00:25:01.945 "state": "enabled", 00:25:01.945 "thread": "nvmf_tgt_poll_group_000", 00:25:01.945 "listen_address": { 00:25:01.945 "trtype": "TCP", 00:25:01.945 "adrfam": "IPv4", 00:25:01.945 "traddr": "10.0.0.2", 00:25:01.945 "trsvcid": "4420" 00:25:01.945 }, 00:25:01.945 "peer_address": { 00:25:01.945 "trtype": "TCP", 00:25:01.945 "adrfam": "IPv4", 00:25:01.945 "traddr": "10.0.0.1", 00:25:01.945 "trsvcid": "58150" 00:25:01.945 }, 00:25:01.945 "auth": { 00:25:01.945 "state": "completed", 00:25:01.945 "digest": "sha512", 00:25:01.945 "dhgroup": "ffdhe8192" 00:25:01.945 } 00:25:01.945 } 00:25:01.945 ]' 00:25:01.945 08:21:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:01.945 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:01.945 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:01.945 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:01.945 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:01.945 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:01.945 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:01.945 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:02.201 08:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:25:03.158 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:03.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:03.158 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.158 08:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.158 08:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.158 08:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.158 
08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:25:03.158 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:25:03.158 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:25:03.158 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:03.158 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:03.158 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:03.415 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:25:03.415 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:03.415 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:03.415 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:03.415 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:03.415 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:03.415 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.415 08:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.415 08:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.415 08:21:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.415 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.415 08:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.344 00:25:04.344 08:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:04.344 08:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:04.344 08:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:04.600 { 00:25:04.600 "cntlid": 145, 00:25:04.600 "qid": 0, 00:25:04.600 "state": "enabled", 00:25:04.600 "thread": "nvmf_tgt_poll_group_000", 00:25:04.600 "listen_address": { 00:25:04.600 "trtype": "TCP", 00:25:04.600 "adrfam": 
"IPv4", 00:25:04.600 "traddr": "10.0.0.2", 00:25:04.600 "trsvcid": "4420" 00:25:04.600 }, 00:25:04.600 "peer_address": { 00:25:04.600 "trtype": "TCP", 00:25:04.600 "adrfam": "IPv4", 00:25:04.600 "traddr": "10.0.0.1", 00:25:04.600 "trsvcid": "58170" 00:25:04.600 }, 00:25:04.600 "auth": { 00:25:04.600 "state": "completed", 00:25:04.600 "digest": "sha512", 00:25:04.600 "dhgroup": "ffdhe8192" 00:25:04.600 } 00:25:04.600 } 00:25:04.600 ]' 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:04.600 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:04.857 08:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDQ0MThjYjU1N2ZlYWIxZDFkZDJlMmY5Y2MyMDg3OTIzODNjNGM5MTE1MDE1MDgwyzqqkA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyOWZjM2EyMjQ4OTUyYjAwODZlZDFjZjc2MjI3ZWM4MDU0N2I3NzAzNWU0Mzk3NmViNGFhMGIwMzk1M2FlNSWfi4M=: 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:05.788 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.788 08:21:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:05.788 08:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:06.720 request: 00:25:06.720 { 00:25:06.720 "name": "nvme0", 00:25:06.720 "trtype": "tcp", 00:25:06.720 "traddr": "10.0.0.2", 00:25:06.720 "adrfam": "ipv4", 00:25:06.720 "trsvcid": "4420", 00:25:06.720 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:06.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:25:06.720 "prchk_reftag": false, 00:25:06.720 "prchk_guard": false, 00:25:06.720 "hdgst": false, 00:25:06.720 "ddgst": false, 00:25:06.720 "dhchap_key": "key2", 00:25:06.720 "method": "bdev_nvme_attach_controller", 00:25:06.720 "req_id": 1 00:25:06.720 } 00:25:06.720 Got JSON-RPC error response 00:25:06.720 response: 00:25:06.720 { 00:25:06.720 "code": -5, 00:25:06.720 "message": "Input/output error" 00:25:06.720 } 00:25:06.720 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:25:06.720 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:06.720 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.721 
08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:06.721 08:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:07.653 request: 00:25:07.653 { 00:25:07.653 "name": "nvme0", 00:25:07.653 "trtype": "tcp", 00:25:07.653 "traddr": "10.0.0.2", 00:25:07.653 "adrfam": "ipv4", 00:25:07.653 "trsvcid": "4420", 00:25:07.653 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:07.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:25:07.653 "prchk_reftag": false, 00:25:07.653 "prchk_guard": false, 00:25:07.653 "hdgst": false, 00:25:07.653 "ddgst": false, 00:25:07.653 "dhchap_key": "key1", 00:25:07.653 "dhchap_ctrlr_key": "ckey2", 00:25:07.653 "method": "bdev_nvme_attach_controller", 00:25:07.653 "req_id": 1 00:25:07.653 } 00:25:07.653 Got JSON-RPC error response 00:25:07.653 response: 00:25:07.653 { 00:25:07.653 "code": -5, 00:25:07.653 "message": "Input/output error" 00:25:07.653 } 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.653 08:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.583 request: 00:25:08.583 { 00:25:08.583 "name": "nvme0", 00:25:08.583 "trtype": "tcp", 00:25:08.583 "traddr": "10.0.0.2", 00:25:08.583 "adrfam": "ipv4", 00:25:08.583 "trsvcid": "4420", 00:25:08.583 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:08.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:25:08.583 "prchk_reftag": false, 00:25:08.583 "prchk_guard": false, 00:25:08.583 "hdgst": false, 00:25:08.583 "ddgst": false, 00:25:08.583 "dhchap_key": "key1", 00:25:08.583 "dhchap_ctrlr_key": "ckey1", 00:25:08.583 "method": "bdev_nvme_attach_controller", 00:25:08.583 "req_id": 1 00:25:08.583 } 00:25:08.583 Got JSON-RPC error response 00:25:08.583 response: 00:25:08.583 { 00:25:08.583 "code": -5, 00:25:08.583 "message": "Input/output error" 00:25:08.583 } 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:08.583 08:21:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 4128769 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4128769 ']' 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4128769 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4128769 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4128769' 00:25:08.583 killing process with pid 4128769 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4128769 00:25:08.583 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4128769 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L 
nvmf_auth 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4151349 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4151349 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4151349 ']' 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:08.840 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 4151349 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4151349 ']' 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:09.097 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.354 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.354 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:25:09.354 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:25:09.354 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.354 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:09.611 08:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:10.541 00:25:10.541 08:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:10.541 08:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:10.541 08:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:10.798 { 00:25:10.798 "cntlid": 1, 00:25:10.798 "qid": 0, 00:25:10.798 "state": "enabled", 00:25:10.798 "thread": "nvmf_tgt_poll_group_000", 00:25:10.798 "listen_address": { 00:25:10.798 "trtype": "TCP", 00:25:10.798 "adrfam": "IPv4", 00:25:10.798 "traddr": "10.0.0.2", 00:25:10.798 "trsvcid": "4420" 00:25:10.798 }, 00:25:10.798 "peer_address": { 00:25:10.798 "trtype": "TCP", 00:25:10.798 "adrfam": "IPv4", 00:25:10.798 "traddr": "10.0.0.1", 00:25:10.798 "trsvcid": 
"57892" 00:25:10.798 }, 00:25:10.798 "auth": { 00:25:10.798 "state": "completed", 00:25:10.798 "digest": "sha512", 00:25:10.798 "dhgroup": "ffdhe8192" 00:25:10.798 } 00:25:10.798 } 00:25:10.798 ]' 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:10.798 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:11.109 08:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTUxNjViOGQyNzE1MzYzNTFjOTMxODZhMzk0ZTI2NDUzYjgxZTdjYmM3ODZlZDJlNWU0MTNkZDQzODQwMzM1YmGhgCo=: 00:25:12.038 08:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:12.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:12.038 08:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:12.039 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:12.039 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.039 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.039 08:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:25:12.039 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.039 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.039 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.039 08:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:25:12.039 08:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:25:12.296 08:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:12.296 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:25:12.296 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:12.296 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:25:12.296 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:12.296 08:21:21 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:25:12.296 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:12.296 08:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:12.296 08:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:12.553 request: 00:25:12.553 { 00:25:12.554 "name": "nvme0", 00:25:12.554 "trtype": "tcp", 00:25:12.554 "traddr": "10.0.0.2", 00:25:12.554 "adrfam": "ipv4", 00:25:12.554 "trsvcid": "4420", 00:25:12.554 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:12.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:25:12.554 "prchk_reftag": false, 00:25:12.554 "prchk_guard": false, 00:25:12.554 "hdgst": false, 00:25:12.554 "ddgst": false, 00:25:12.554 "dhchap_key": "key3", 00:25:12.554 "method": "bdev_nvme_attach_controller", 00:25:12.554 "req_id": 1 00:25:12.554 } 00:25:12.554 Got JSON-RPC error response 00:25:12.554 response: 00:25:12.554 { 00:25:12.554 "code": -5, 00:25:12.554 "message": "Input/output error" 00:25:12.554 } 00:25:12.554 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:25:12.554 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:12.554 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:12.554 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:12.554 08:21:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:25:12.554 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:25:12.554 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:25:12.554 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:25:12.811 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:12.811 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:25:12.811 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:12.811 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:25:12.811 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:12.811 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:25:12.811 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:12.811 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:12.811 
08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:13.069 request: 00:25:13.069 { 00:25:13.069 "name": "nvme0", 00:25:13.069 "trtype": "tcp", 00:25:13.069 "traddr": "10.0.0.2", 00:25:13.069 "adrfam": "ipv4", 00:25:13.069 "trsvcid": "4420", 00:25:13.069 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:13.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:25:13.069 "prchk_reftag": false, 00:25:13.069 "prchk_guard": false, 00:25:13.069 "hdgst": false, 00:25:13.069 "ddgst": false, 00:25:13.069 "dhchap_key": "key3", 00:25:13.069 "method": "bdev_nvme_attach_controller", 00:25:13.069 "req_id": 1 00:25:13.069 } 00:25:13.069 Got JSON-RPC error response 00:25:13.069 response: 00:25:13.069 { 00:25:13.069 "code": -5, 00:25:13.069 "message": "Input/output error" 00:25:13.069 } 00:25:13.069 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:25:13.069 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:13.069 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:13.069 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:13.069 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:25:13.069 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:25:13.069 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:25:13.069 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:13.069 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:13.069 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:25:13.327 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:13.328 08:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:13.328 08:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:13.584 request: 00:25:13.584 { 00:25:13.584 "name": "nvme0", 00:25:13.584 "trtype": "tcp", 00:25:13.584 "traddr": "10.0.0.2", 00:25:13.584 "adrfam": "ipv4", 00:25:13.584 "trsvcid": "4420", 00:25:13.584 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:13.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:25:13.584 "prchk_reftag": false, 00:25:13.584 "prchk_guard": false, 00:25:13.584 "hdgst": false, 00:25:13.584 "ddgst": false, 00:25:13.584 "dhchap_key": "key0", 00:25:13.584 "dhchap_ctrlr_key": "key1", 00:25:13.584 "method": "bdev_nvme_attach_controller", 00:25:13.584 "req_id": 1 00:25:13.584 } 00:25:13.584 Got JSON-RPC error response 00:25:13.584 response: 00:25:13.584 { 
00:25:13.584 "code": -5, 00:25:13.584 "message": "Input/output error" 00:25:13.584 } 00:25:13.584 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:25:13.584 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:13.584 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:13.584 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:13.584 08:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:13.585 08:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:13.841 00:25:13.841 08:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:25:13.841 08:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:25:13.841 08:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:14.098 08:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.098 08:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:14.099 08:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4128894 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4128894 ']' 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4128894 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4128894 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4128894' 00:25:14.356 killing process with pid 4128894 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4128894 00:25:14.356 08:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4128894 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:14.921 rmmod nvme_tcp 00:25:14.921 rmmod nvme_fabrics 
00:25:14.921 rmmod nvme_keyring 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4151349 ']' 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4151349 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4151349 ']' 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4151349 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4151349 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4151349' 00:25:14.921 killing process with pid 4151349 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4151349 00:25:14.921 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4151349 00:25:15.179 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:15.179 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:15.179 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:15.179 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:25:15.179 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:15.179 08:21:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.179 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.179 08:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.079 08:21:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:17.079 08:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.KaO /tmp/spdk.key-sha256.est /tmp/spdk.key-sha384.FoQ /tmp/spdk.key-sha512.i4Y /tmp/spdk.key-sha512.5mS /tmp/spdk.key-sha384.1jz /tmp/spdk.key-sha256.m71 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:25:17.079 00:25:17.079 real 3m8.350s 00:25:17.079 user 7m18.227s 00:25:17.079 sys 0m24.792s 00:25:17.079 08:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:17.079 08:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.079 ************************************ 00:25:17.079 END TEST nvmf_auth_target 00:25:17.079 ************************************ 00:25:17.342 08:21:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:17.342 08:21:26 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:25:17.342 08:21:26 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:17.342 08:21:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:25:17.342 08:21:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:17.342 08:21:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:17.342 
************************************ 00:25:17.342 START TEST nvmf_bdevio_no_huge 00:25:17.342 ************************************ 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:17.342 * Looking for test storage... 00:25:17.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 
00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.342 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:17.343 08:21:26 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:17.343 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:25:17.343 08:21:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:19.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:19.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:19.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:19.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:19.262 08:21:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.262 
08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.262 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:19.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:25:19.263 00:25:19.263 --- 10.0.0.2 ping statistics --- 00:25:19.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.263 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:25:19.263 00:25:19.263 --- 10.0.0.1 ping statistics --- 00:25:19.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.263 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:19.263 08:21:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4153997 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4153997 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 4153997 ']' 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:19.263 08:21:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:19.520 [2024-07-21 08:21:28.915459] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:25:19.520 [2024-07-21 08:21:28.915546] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:25:19.520 [2024-07-21 08:21:28.992173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:19.520 [2024-07-21 08:21:29.083201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.520 [2024-07-21 08:21:29.083255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.520 [2024-07-21 08:21:29.083272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.520 [2024-07-21 08:21:29.083285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.520 [2024-07-21 08:21:29.083297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:19.520 [2024-07-21 08:21:29.083697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:19.520 [2024-07-21 08:21:29.083720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:19.520 [2024-07-21 08:21:29.083948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:19.520 [2024-07-21 08:21:29.083955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:19.777 [2024-07-21 08:21:29.205038] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:19.777 Malloc0 00:25:19.777 08:21:29 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:19.777 [2024-07-21 08:21:29.243167] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:25:19.777 08:21:29 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.777 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.777 { 00:25:19.777 "params": { 00:25:19.777 "name": "Nvme$subsystem", 00:25:19.777 "trtype": "$TEST_TRANSPORT", 00:25:19.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.777 "adrfam": "ipv4", 00:25:19.777 "trsvcid": "$NVMF_PORT", 00:25:19.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.778 "hdgst": ${hdgst:-false}, 00:25:19.778 "ddgst": ${ddgst:-false} 00:25:19.778 }, 00:25:19.778 "method": "bdev_nvme_attach_controller" 00:25:19.778 } 00:25:19.778 EOF 00:25:19.778 )") 00:25:19.778 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:25:19.778 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:25:19.778 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:25:19.778 08:21:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:19.778 "params": { 00:25:19.778 "name": "Nvme1", 00:25:19.778 "trtype": "tcp", 00:25:19.778 "traddr": "10.0.0.2", 00:25:19.778 "adrfam": "ipv4", 00:25:19.778 "trsvcid": "4420", 00:25:19.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:19.778 "hdgst": false, 00:25:19.778 "ddgst": false 00:25:19.778 }, 00:25:19.778 "method": "bdev_nvme_attach_controller" 00:25:19.778 }' 00:25:19.778 [2024-07-21 08:21:29.287165] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:25:19.778 [2024-07-21 08:21:29.287238] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4154028 ] 00:25:19.778 [2024-07-21 08:21:29.345852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:20.034 [2024-07-21 08:21:29.433259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.034 [2024-07-21 08:21:29.433309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.034 [2024-07-21 08:21:29.433312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.034 I/O targets: 00:25:20.034 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:20.034 00:25:20.034 00:25:20.034 CUnit - A unit testing framework for C - Version 2.1-3 00:25:20.034 http://cunit.sourceforge.net/ 00:25:20.034 00:25:20.034 00:25:20.034 Suite: bdevio tests on: Nvme1n1 00:25:20.034 Test: blockdev write read block ...passed 00:25:20.292 Test: blockdev write zeroes read block ...passed 00:25:20.292 Test: blockdev write zeroes read no split ...passed 00:25:20.292 Test: blockdev write zeroes read split ...passed 00:25:20.292 Test: blockdev write zeroes read split partial ...passed 00:25:20.292 Test: blockdev reset ...[2024-07-21 08:21:29.750227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.292 [2024-07-21 08:21:29.750344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad84e0 (9): Bad file descriptor 00:25:20.292 [2024-07-21 08:21:29.807202] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:20.292 passed 00:25:20.292 Test: blockdev write read 8 blocks ...passed 00:25:20.292 Test: blockdev write read size > 128k ...passed 00:25:20.292 Test: blockdev write read invalid size ...passed 00:25:20.292 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:20.292 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:20.292 Test: blockdev write read max offset ...passed 00:25:20.549 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:20.549 Test: blockdev writev readv 8 blocks ...passed 00:25:20.549 Test: blockdev writev readv 30 x 1block ...passed 00:25:20.549 Test: blockdev writev readv block ...passed 00:25:20.549 Test: blockdev writev readv size > 128k ...passed 00:25:20.549 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:20.549 Test: blockdev comparev and writev ...[2024-07-21 08:21:30.104960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:20.549 [2024-07-21 08:21:30.105018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.549 [2024-07-21 08:21:30.105042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:20.549 [2024-07-21 08:21:30.105060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.549 [2024-07-21 08:21:30.105371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:20.549 [2024-07-21 08:21:30.105395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:20.549 [2024-07-21 08:21:30.105417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:20.549 [2024-07-21 08:21:30.105433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:20.549 [2024-07-21 08:21:30.105754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:20.549 [2024-07-21 08:21:30.105777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:20.549 [2024-07-21 08:21:30.105800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:20.549 [2024-07-21 08:21:30.105815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:20.549 [2024-07-21 08:21:30.106140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:20.549 [2024-07-21 08:21:30.106163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:20.549 [2024-07-21 08:21:30.106184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:20.549 [2024-07-21 08:21:30.106200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:20.549 passed 00:25:20.806 Test: blockdev nvme passthru rw ...passed 00:25:20.806 Test: blockdev nvme passthru vendor specific ...[2024-07-21 08:21:30.189886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:20.806 [2024-07-21 08:21:30.189913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:20.806 [2024-07-21 08:21:30.190064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:20.806 [2024-07-21 08:21:30.190095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:20.806 [2024-07-21 08:21:30.190237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:20.806 [2024-07-21 08:21:30.190260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:20.806 [2024-07-21 08:21:30.190408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:20.806 [2024-07-21 08:21:30.190430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:20.806 passed 00:25:20.806 Test: blockdev nvme admin passthru ...passed 00:25:20.806 Test: blockdev copy ...passed 00:25:20.806 00:25:20.806 Run Summary: Type Total Ran Passed Failed Inactive 00:25:20.806 suites 1 1 n/a 0 0 00:25:20.806 tests 23 23 23 0 0 00:25:20.806 asserts 152 152 152 0 n/a 00:25:20.806 00:25:20.806 Elapsed time = 1.307 seconds 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:21.064 rmmod nvme_tcp 00:25:21.064 rmmod nvme_fabrics 00:25:21.064 rmmod nvme_keyring 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4153997 ']' 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4153997 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 4153997 ']' 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 4153997 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4153997 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 4153997' 00:25:21.064 killing process with pid 4153997 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 4153997 00:25:21.064 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 4153997 00:25:21.631 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:21.631 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:21.631 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:21.631 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:21.631 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:21.631 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.631 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.631 08:21:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.537 08:21:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:23.537 00:25:23.537 real 0m6.299s 00:25:23.537 user 0m10.138s 00:25:23.537 sys 0m2.367s 00:25:23.537 08:21:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:23.537 08:21:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:23.537 ************************************ 00:25:23.537 END TEST nvmf_bdevio_no_huge 00:25:23.537 ************************************ 00:25:23.537 08:21:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:23.537 08:21:33 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:23.537 08:21:33 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:23.537 08:21:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:23.537 08:21:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.537 ************************************ 00:25:23.537 START TEST nvmf_tls 00:25:23.537 ************************************ 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:23.537 * Looking for test storage... 00:25:23.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.537 08:21:33 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:23.538 08:21:33 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:25:23.538 08:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:25.445 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:25.446 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:25.446 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.446 08:21:35 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:25.446 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:25.446 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.446 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:25.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:25:25.703 00:25:25.703 --- 10.0.0.2 ping statistics --- 00:25:25.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.703 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:25.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:25:25.703 00:25:25.703 --- 10.0.0.1 ping statistics --- 00:25:25.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.703 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4156158 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4156158 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4156158 ']' 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.703 08:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.703 [2024-07-21 08:21:35.279424] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:25:25.703 [2024-07-21 08:21:35.279511] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.703 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.961 [2024-07-21 08:21:35.353117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.961 [2024-07-21 08:21:35.440799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.961 [2024-07-21 08:21:35.440856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.961 [2024-07-21 08:21:35.440887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.961 [2024-07-21 08:21:35.440899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.961 [2024-07-21 08:21:35.440909] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:25.961 [2024-07-21 08:21:35.440949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.961 08:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.961 08:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:25.961 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:25.961 08:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:25.961 08:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.961 08:21:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.961 08:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:25:25.961 08:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:25:26.218 true 00:25:26.218 08:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:26.218 08:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:25:26.476 08:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:25:26.476 08:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:25:26.476 08:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:26.734 08:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:26.734 08:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:25:26.992 08:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:25:26.992 08:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:25:26.992 08:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:25:27.250 08:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:27.250 08:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:25:27.507 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:25:27.507 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:25:27.507 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:27.507 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:25:27.765 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:25:27.765 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:25:27.765 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:25:28.022 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:28.023 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:25:28.280 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:25:28.280 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:25:28.280 08:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:25:28.538 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:28.538 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:25:28.796 08:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:29.055 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:29.055 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:25:29.055 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.eknzdKUwyC 00:25:29.055 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:25:29.055 
08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.eqa9wJmXbg 00:25:29.055 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:29.055 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:29.055 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.eknzdKUwyC 00:25:29.055 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.eqa9wJmXbg 00:25:29.055 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:29.313 08:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:25:29.569 08:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.eknzdKUwyC 00:25:29.569 08:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eknzdKUwyC 00:25:29.569 08:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:29.826 [2024-07-21 08:21:39.287468] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.827 08:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:30.085 08:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:30.343 [2024-07-21 08:21:39.869048] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:30.343 [2024-07-21 08:21:39.869296] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:25:30.343 08:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:30.600 malloc0 00:25:30.600 08:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:30.858 08:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eknzdKUwyC 00:25:31.115 [2024-07-21 08:21:40.659125] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:31.115 08:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.eknzdKUwyC 00:25:31.115 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.352 Initializing NVMe Controllers 00:25:43.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:43.352 Initialization complete. Launching workers. 
00:25:43.352 ======================================================== 00:25:43.352 Latency(us) 00:25:43.352 Device Information : IOPS MiB/s Average min max 00:25:43.352 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7456.79 29.13 8585.70 1258.42 10049.21 00:25:43.352 ======================================================== 00:25:43.352 Total : 7456.79 29.13 8585.70 1258.42 10049.21 00:25:43.352 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eknzdKUwyC 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eknzdKUwyC' 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4157984 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4157984 /var/tmp/bdevperf.sock 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4157984 ']' 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:43.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:43.352 08:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:43.352 [2024-07-21 08:21:50.818089] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:25:43.352 [2024-07-21 08:21:50.818171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157984 ] 00:25:43.352 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.352 [2024-07-21 08:21:50.874283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.352 [2024-07-21 08:21:50.957686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.352 08:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:43.352 08:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:43.352 08:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eknzdKUwyC 00:25:43.353 [2024-07-21 08:21:51.295384] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:43.353 [2024-07-21 08:21:51.295502] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:43.353 TLSTESTn1 00:25:43.353 08:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:43.353 Running I/O for 10 seconds... 00:25:53.358 00:25:53.358 Latency(us) 00:25:53.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.358 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:53.358 Verification LBA range: start 0x0 length 0x2000 00:25:53.358 TLSTESTn1 : 10.02 3405.51 13.30 0.00 0.00 37522.23 6747.78 50486.99 00:25:53.358 =================================================================================================================== 00:25:53.358 Total : 3405.51 13.30 0.00 0.00 37522.23 6747.78 50486.99 00:25:53.358 0 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4157984 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4157984 ']' 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4157984 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4157984 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4157984' 00:25:53.358 killing process with pid 4157984 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4157984 00:25:53.358 Received shutdown signal, test time was about 10.000000 seconds 00:25:53.358 00:25:53.358 Latency(us) 
00:25:53.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.358 =================================================================================================================== 00:25:53.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:53.358 [2024-07-21 08:22:01.579011] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4157984 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eqa9wJmXbg 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eqa9wJmXbg 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eqa9wJmXbg 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eqa9wJmXbg' 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4159290 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4159290 /var/tmp/bdevperf.sock 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4159290 ']' 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:53.358 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.359 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:53.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:53.359 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.359 08:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:53.359 [2024-07-21 08:22:01.851560] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:25:53.359 [2024-07-21 08:22:01.851647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159290 ] 00:25:53.359 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.359 [2024-07-21 08:22:01.907367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.359 [2024-07-21 08:22:01.988764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eqa9wJmXbg 00:25:53.359 [2024-07-21 08:22:02.323282] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:53.359 [2024-07-21 08:22:02.323407] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:53.359 [2024-07-21 08:22:02.329773] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:53.359 [2024-07-21 08:22:02.330282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1267ab0 (107): Transport endpoint is not connected 00:25:53.359 [2024-07-21 08:22:02.331273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1267ab0 (9): Bad file descriptor 00:25:53.359 [2024-07-21 
08:22:02.332272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.359 [2024-07-21 08:22:02.332292] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:53.359 [2024-07-21 08:22:02.332324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.359 request: 00:25:53.359 { 00:25:53.359 "name": "TLSTEST", 00:25:53.359 "trtype": "tcp", 00:25:53.359 "traddr": "10.0.0.2", 00:25:53.359 "adrfam": "ipv4", 00:25:53.359 "trsvcid": "4420", 00:25:53.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:53.359 "prchk_reftag": false, 00:25:53.359 "prchk_guard": false, 00:25:53.359 "hdgst": false, 00:25:53.359 "ddgst": false, 00:25:53.359 "psk": "/tmp/tmp.eqa9wJmXbg", 00:25:53.359 "method": "bdev_nvme_attach_controller", 00:25:53.359 "req_id": 1 00:25:53.359 } 00:25:53.359 Got JSON-RPC error response 00:25:53.359 response: 00:25:53.359 { 00:25:53.359 "code": -5, 00:25:53.359 "message": "Input/output error" 00:25:53.359 } 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4159290 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4159290 ']' 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4159290 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4159290 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
4159290' 00:25:53.359 killing process with pid 4159290 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4159290 00:25:53.359 Received shutdown signal, test time was about 10.000000 seconds 00:25:53.359 00:25:53.359 Latency(us) 00:25:53.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.359 =================================================================================================================== 00:25:53.359 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:53.359 [2024-07-21 08:22:02.370412] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4159290 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eknzdKUwyC 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eknzdKUwyC 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eknzdKUwyC 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eknzdKUwyC' 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4159314 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4159314 /var/tmp/bdevperf.sock 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4159314 ']' 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:53.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:53.359 [2024-07-21 08:22:02.604727] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:25:53.359 [2024-07-21 08:22:02.604809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159314 ] 00:25:53.359 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.359 [2024-07-21 08:22:02.666877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.359 [2024-07-21 08:22:02.754324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:53.359 08:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.eknzdKUwyC 00:25:53.616 [2024-07-21 08:22:03.081967] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:53.616 [2024-07-21 08:22:03.082098] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:53.616 [2024-07-21 08:22:03.091580] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:53.616 [2024-07-21 08:22:03.091636] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:53.616 [2024-07-21 08:22:03.091694] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:53.616 [2024-07-21 08:22:03.091969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba2ab0 (107): Transport endpoint is not connected 00:25:53.616 [2024-07-21 08:22:03.092959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba2ab0 (9): Bad file descriptor 00:25:53.616 [2024-07-21 08:22:03.093958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.616 [2024-07-21 08:22:03.093982] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:53.616 [2024-07-21 08:22:03.094015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:53.616 request: 00:25:53.616 { 00:25:53.616 "name": "TLSTEST", 00:25:53.616 "trtype": "tcp", 00:25:53.616 "traddr": "10.0.0.2", 00:25:53.616 "adrfam": "ipv4", 00:25:53.616 "trsvcid": "4420", 00:25:53.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.616 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:53.616 "prchk_reftag": false, 00:25:53.616 "prchk_guard": false, 00:25:53.616 "hdgst": false, 00:25:53.616 "ddgst": false, 00:25:53.616 "psk": "/tmp/tmp.eknzdKUwyC", 00:25:53.616 "method": "bdev_nvme_attach_controller", 00:25:53.616 "req_id": 1 00:25:53.616 } 00:25:53.616 Got JSON-RPC error response 00:25:53.616 response: 00:25:53.616 { 00:25:53.616 "code": -5, 00:25:53.616 "message": "Input/output error" 00:25:53.616 } 00:25:53.616 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4159314 00:25:53.616 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4159314 ']' 00:25:53.616 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4159314 00:25:53.616 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:53.616 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:53.616 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4159314 00:25:53.616 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:53.616 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:53.617 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4159314' 00:25:53.617 killing process with pid 4159314 00:25:53.617 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4159314 00:25:53.617 Received shutdown signal, test time was about 10.000000 seconds 00:25:53.617 00:25:53.617 Latency(us) 00:25:53.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.617 
=================================================================================================================== 00:25:53.617 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:53.617 [2024-07-21 08:22:03.143455] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:53.617 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4159314 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eknzdKUwyC 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eknzdKUwyC 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eknzdKUwyC 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eknzdKUwyC' 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4159444 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4159444 /var/tmp/bdevperf.sock 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4159444 ']' 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:53.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.874 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:53.874 [2024-07-21 08:22:03.394556] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:25:53.874 [2024-07-21 08:22:03.394642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159444 ] 00:25:53.874 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.874 [2024-07-21 08:22:03.450358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.131 [2024-07-21 08:22:03.533841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.131 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:54.131 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:54.131 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eknzdKUwyC 00:25:54.388 [2024-07-21 08:22:03.892753] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:54.388 [2024-07-21 08:22:03.892874] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:54.388 [2024-07-21 08:22:03.903691] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:54.388 [2024-07-21 08:22:03.903728] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:54.388 [2024-07-21 08:22:03.903782] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:25:54.388 [2024-07-21 08:22:03.904767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c55ab0 (107): Transport endpoint is not connected 00:25:54.388 [2024-07-21 08:22:03.905758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c55ab0 (9): Bad file descriptor 00:25:54.388 [2024-07-21 08:22:03.906756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:54.388 [2024-07-21 08:22:03.906775] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:54.388 [2024-07-21 08:22:03.906792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:54.388 request: 00:25:54.388 { 00:25:54.388 "name": "TLSTEST", 00:25:54.388 "trtype": "tcp", 00:25:54.388 "traddr": "10.0.0.2", 00:25:54.388 "adrfam": "ipv4", 00:25:54.388 "trsvcid": "4420", 00:25:54.388 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:54.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:54.388 "prchk_reftag": false, 00:25:54.388 "prchk_guard": false, 00:25:54.388 "hdgst": false, 00:25:54.388 "ddgst": false, 00:25:54.388 "psk": "/tmp/tmp.eknzdKUwyC", 00:25:54.388 "method": "bdev_nvme_attach_controller", 00:25:54.388 "req_id": 1 00:25:54.388 } 00:25:54.388 Got JSON-RPC error response 00:25:54.388 response: 00:25:54.388 { 00:25:54.388 "code": -5, 00:25:54.388 "message": "Input/output error" 00:25:54.388 } 00:25:54.388 08:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4159444 00:25:54.388 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4159444 ']' 00:25:54.388 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4159444 00:25:54.388 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:54.388 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:54.388 08:22:03 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4159444 00:25:54.388 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:54.388 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:54.388 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4159444' 00:25:54.388 killing process with pid 4159444 00:25:54.388 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4159444 00:25:54.388 Received shutdown signal, test time was about 10.000000 seconds 00:25:54.388 00:25:54.388 Latency(us) 00:25:54.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.388 =================================================================================================================== 00:25:54.388 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:54.388 [2024-07-21 08:22:03.948630] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:54.388 08:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4159444 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4159579 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:54.645 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4159579 /var/tmp/bdevperf.sock 00:25:54.646 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4159579 ']' 00:25:54.646 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:54.646 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:54.646 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:54.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:54.646 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:54.646 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:54.646 [2024-07-21 08:22:04.180179] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:25:54.646 [2024-07-21 08:22:04.180257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159579 ] 00:25:54.646 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.646 [2024-07-21 08:22:04.237468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.903 [2024-07-21 08:22:04.323975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.903 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:54.903 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:54.903 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:55.161 [2024-07-21 08:22:04.645735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:55.161 [2024-07-21 08:22:04.647510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6de60 (9): Bad file descriptor 00:25:55.161 [2024-07-21 08:22:04.648505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:55.161 [2024-07-21 08:22:04.648525] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:55.161 [2024-07-21 08:22:04.648555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:55.161 request: 00:25:55.161 { 00:25:55.161 "name": "TLSTEST", 00:25:55.161 "trtype": "tcp", 00:25:55.161 "traddr": "10.0.0.2", 00:25:55.161 "adrfam": "ipv4", 00:25:55.161 "trsvcid": "4420", 00:25:55.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:55.161 "prchk_reftag": false, 00:25:55.161 "prchk_guard": false, 00:25:55.161 "hdgst": false, 00:25:55.161 "ddgst": false, 00:25:55.161 "method": "bdev_nvme_attach_controller", 00:25:55.161 "req_id": 1 00:25:55.161 } 00:25:55.161 Got JSON-RPC error response 00:25:55.161 response: 00:25:55.161 { 00:25:55.161 "code": -5, 00:25:55.161 "message": "Input/output error" 00:25:55.161 } 00:25:55.161 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4159579 00:25:55.161 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4159579 ']' 00:25:55.161 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4159579 00:25:55.161 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:55.161 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:55.161 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4159579 00:25:55.161 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:55.161 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:55.161 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4159579' 00:25:55.161 killing process with pid 4159579 00:25:55.161 08:22:04 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 4159579 00:25:55.161 Received shutdown signal, test time was about 10.000000 seconds 00:25:55.161 00:25:55.161 Latency(us) 00:25:55.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.161 =================================================================================================================== 00:25:55.161 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:55.161 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4159579 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 4156158 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4156158 ']' 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4156158 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4156158 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4156158' 00:25:55.419 killing process with pid 4156158 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 
4156158 00:25:55.419 [2024-07-21 08:22:04.938131] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:55.419 08:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4156158 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.JPBYj9wOX0 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.JPBYj9wOX0 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls 
-- nvmf/common.sh@481 -- # nvmfpid=4159727 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4159727 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4159727 ']' 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:55.677 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:55.677 [2024-07-21 08:22:05.289443] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:25:55.677 [2024-07-21 08:22:05.289539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.936 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.936 [2024-07-21 08:22:05.355685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.936 [2024-07-21 08:22:05.442656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.936 [2024-07-21 08:22:05.442713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:55.936 [2024-07-21 08:22:05.442742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.936 [2024-07-21 08:22:05.442753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.936 [2024-07-21 08:22:05.442763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.936 [2024-07-21 08:22:05.442790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.936 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.936 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:55.936 08:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:55.936 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:55.936 08:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:56.193 08:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.193 08:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.JPBYj9wOX0 00:25:56.193 08:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JPBYj9wOX0 00:25:56.193 08:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:56.193 [2024-07-21 08:22:05.802132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.193 08:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:56.451 08:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
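The `format_interchange_psk`/`format_key` step earlier in the trace (target/tls.sh@159) turns the configured hex string into the `NVMeTLSkey-1:02:...:` value via a python heredoc in nvmf/common.sh. A minimal standalone sketch of that encoding, assuming it follows the TP 8006-style interchange format (ASCII key bytes with a little-endian CRC-32 appended, then base64):

```python
import base64
import zlib


def format_interchange_psk(key: str, hash_id: int) -> str:
    # Sketch of the encoding assumed from the trace: the configured key
    # string is taken as raw ASCII bytes, a little-endian CRC-32 of those
    # bytes is appended, and the result is base64-encoded under the
    # "NVMeTLSkey-1" prefix with the hash identifier as two hex digits.
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(raw + crc).decode())


psk = format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2)
print(psk)
```

The `2` selects the same digest identifier seen in the trace's `key_long` value; decoding the base64 payload and stripping the trailing four CRC bytes recovers the original key string.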
00:25:56.708 [2024-07-21 08:22:06.291503] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:56.708 [2024-07-21 08:22:06.291747] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.708 08:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:56.965 malloc0 00:25:56.965 08:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:57.222 08:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JPBYj9wOX0 00:25:57.480 [2024-07-21 08:22:07.082112] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JPBYj9wOX0 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JPBYj9wOX0' 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4159895 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:57.480 08:22:07 
nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4159895 /var/tmp/bdevperf.sock 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4159895 ']' 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:57.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.480 08:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:57.738 [2024-07-21 08:22:07.143115] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:25:57.738 [2024-07-21 08:22:07.143191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159895 ] 00:25:57.738 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.738 [2024-07-21 08:22:07.204938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.738 [2024-07-21 08:22:07.290418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.995 08:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:57.995 08:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:57.995 08:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JPBYj9wOX0 00:25:57.995 [2024-07-21 08:22:07.610742] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:57.995 [2024-07-21 08:22:07.610853] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:58.253 TLSTESTn1 00:25:58.253 08:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:58.253 Running I/O for 10 seconds... 
00:26:08.253 00:26:08.253 Latency(us) 00:26:08.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.253 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:08.253 Verification LBA range: start 0x0 length 0x2000 00:26:08.253 TLSTESTn1 : 10.04 2413.16 9.43 0.00 0.00 52921.17 7184.69 62137.84 00:26:08.253 =================================================================================================================== 00:26:08.253 Total : 2413.16 9.43 0.00 0.00 52921.17 7184.69 62137.84 00:26:08.253 0 00:26:08.253 08:22:17 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:08.253 08:22:17 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4159895 00:26:08.253 08:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4159895 ']' 00:26:08.253 08:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4159895 00:26:08.253 08:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:08.511 08:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:08.511 08:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4159895 00:26:08.511 08:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:08.511 08:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:08.511 08:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4159895' 00:26:08.511 killing process with pid 4159895 00:26:08.511 08:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4159895 00:26:08.511 Received shutdown signal, test time was about 10.000000 seconds 00:26:08.511 00:26:08.511 Latency(us) 00:26:08.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.511 
=================================================================================================================== 00:26:08.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:08.511 [2024-07-21 08:22:17.911789] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:08.511 08:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4159895 00:26:08.511 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.JPBYj9wOX0 00:26:08.511 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JPBYj9wOX0 00:26:08.511 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:26:08.511 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JPBYj9wOX0 00:26:08.511 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:26:08.511 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:08.511 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JPBYj9wOX0 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JPBYj9wOX0' 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4161209 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4161209 /var/tmp/bdevperf.sock 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4161209 ']' 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:08.769 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.769 [2024-07-21 08:22:18.185358] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:26:08.769 [2024-07-21 08:22:18.185437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161209 ] 00:26:08.769 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.769 [2024-07-21 08:22:18.246001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.769 [2024-07-21 08:22:18.332143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.026 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:09.026 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:09.026 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JPBYj9wOX0 00:26:09.283 [2024-07-21 08:22:18.660985] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:09.283 [2024-07-21 08:22:18.661072] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:09.283 [2024-07-21 08:22:18.661087] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.JPBYj9wOX0 00:26:09.283 request: 00:26:09.283 { 00:26:09.283 "name": "TLSTEST", 00:26:09.283 "trtype": "tcp", 00:26:09.283 "traddr": "10.0.0.2", 00:26:09.283 "adrfam": "ipv4", 00:26:09.283 "trsvcid": "4420", 00:26:09.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:09.283 "prchk_reftag": false, 00:26:09.283 "prchk_guard": false, 00:26:09.283 "hdgst": false, 00:26:09.283 "ddgst": false, 00:26:09.283 "psk": "/tmp/tmp.JPBYj9wOX0", 00:26:09.283 "method": "bdev_nvme_attach_controller", 
00:26:09.283 "req_id": 1 00:26:09.283 } 00:26:09.283 Got JSON-RPC error response 00:26:09.283 response: 00:26:09.283 { 00:26:09.283 "code": -1, 00:26:09.283 "message": "Operation not permitted" 00:26:09.283 } 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4161209 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4161209 ']' 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4161209 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4161209 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4161209' 00:26:09.283 killing process with pid 4161209 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4161209 00:26:09.283 Received shutdown signal, test time was about 10.000000 seconds 00:26:09.283 00:26:09.283 Latency(us) 00:26:09.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.283 =================================================================================================================== 00:26:09.283 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4161209 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:09.283 
08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 4159727 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4159727 ']' 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4159727 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:09.283 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4159727 00:26:09.541 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:09.541 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:09.541 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4159727' 00:26:09.541 killing process with pid 4159727 00:26:09.541 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4159727 00:26:09.541 [2024-07-21 08:22:18.922344] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:09.541 08:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4159727 00:26:09.541 08:22:19 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:26:09.541 08:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:09.541 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:09.541 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:09.798 08:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4161352 00:26:09.798 08:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:09.798 08:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4161352 00:26:09.798 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4161352 ']' 00:26:09.798 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.798 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.798 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.798 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.812 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:09.812 [2024-07-21 08:22:19.222779] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:26:09.812 [2024-07-21 08:22:19.222861] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.812 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.812 [2024-07-21 08:22:19.289322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.812 [2024-07-21 08:22:19.376531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.812 [2024-07-21 08:22:19.376597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:09.812 [2024-07-21 08:22:19.376641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.813 [2024-07-21 08:22:19.376657] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.813 [2024-07-21 08:22:19.376669] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.813 [2024-07-21 08:22:19.376707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.JPBYj9wOX0 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JPBYj9wOX0 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.JPBYj9wOX0 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.JPBYj9wOX0 00:26:10.070 08:22:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:10.327 [2024-07-21 08:22:19.757306] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.327 08:22:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:10.583 08:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:10.840 [2024-07-21 08:22:20.246710] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:10.840 [2024-07-21 08:22:20.246956] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.840 08:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:11.097 malloc0 00:26:11.097 08:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:11.355 08:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JPBYj9wOX0 00:26:11.612 [2024-07-21 08:22:20.992597] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:11.612 [2024-07-21 08:22:20.992651] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:26:11.612 [2024-07-21 08:22:20.992686] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:26:11.612 
request: 00:26:11.612 { 00:26:11.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.612 "host": "nqn.2016-06.io.spdk:host1", 00:26:11.612 "psk": "/tmp/tmp.JPBYj9wOX0", 00:26:11.612 "method": "nvmf_subsystem_add_host", 00:26:11.612 "req_id": 1 00:26:11.612 } 00:26:11.612 Got JSON-RPC error response 00:26:11.612 response: 00:26:11.612 { 00:26:11.612 "code": -32603, 00:26:11.612 "message": "Internal error" 00:26:11.612 } 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 4161352 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4161352 ']' 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4161352 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4161352 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4161352' 00:26:11.612 killing process with pid 4161352 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4161352 00:26:11.612 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4161352 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.JPBYj9wOX0 
00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4161645 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4161645 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4161645 ']' 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:11.870 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:11.870 [2024-07-21 08:22:21.351042] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:26:11.870 [2024-07-21 08:22:21.351128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.870 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.870 [2024-07-21 08:22:21.417544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.127 [2024-07-21 08:22:21.504759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.127 [2024-07-21 08:22:21.504814] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.127 [2024-07-21 08:22:21.504830] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.127 [2024-07-21 08:22:21.504843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.127 [2024-07-21 08:22:21.504855] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:12.127 [2024-07-21 08:22:21.504885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.127 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:12.127 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:12.127 08:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:12.127 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:12.127 08:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:12.127 08:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.127 08:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.JPBYj9wOX0 00:26:12.127 08:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JPBYj9wOX0 00:26:12.128 08:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:12.385 [2024-07-21 08:22:21.872242] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.385 08:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:12.642 08:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:12.899 [2024-07-21 08:22:22.361553] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:12.899 [2024-07-21 08:22:22.361796] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.899 08:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:26:13.156 malloc0 00:26:13.156 08:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:13.414 08:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JPBYj9wOX0 00:26:13.671 [2024-07-21 08:22:23.099111] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:13.671 08:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4161873 00:26:13.671 08:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:13.671 08:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:13.671 08:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4161873 /var/tmp/bdevperf.sock 00:26:13.671 08:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4161873 ']' 00:26:13.671 08:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:13.671 08:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.671 08:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:13.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:13.671 08:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.671 08:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:13.671 [2024-07-21 08:22:23.160594] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:26:13.671 [2024-07-21 08:22:23.160694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161873 ] 00:26:13.671 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.671 [2024-07-21 08:22:23.219371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.928 [2024-07-21 08:22:23.303683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.928 08:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:13.928 08:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:13.928 08:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JPBYj9wOX0 00:26:14.186 [2024-07-21 08:22:23.636891] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:14.186 [2024-07-21 08:22:23.637047] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:14.186 TLSTESTn1 00:26:14.186 08:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:26:14.444 08:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:26:14.444 "subsystems": [ 00:26:14.444 { 00:26:14.444 
"subsystem": "keyring", 00:26:14.444 "config": [] 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "subsystem": "iobuf", 00:26:14.444 "config": [ 00:26:14.444 { 00:26:14.444 "method": "iobuf_set_options", 00:26:14.444 "params": { 00:26:14.444 "small_pool_count": 8192, 00:26:14.444 "large_pool_count": 1024, 00:26:14.444 "small_bufsize": 8192, 00:26:14.444 "large_bufsize": 135168 00:26:14.444 } 00:26:14.444 } 00:26:14.444 ] 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "subsystem": "sock", 00:26:14.444 "config": [ 00:26:14.444 { 00:26:14.444 "method": "sock_set_default_impl", 00:26:14.444 "params": { 00:26:14.444 "impl_name": "posix" 00:26:14.444 } 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "method": "sock_impl_set_options", 00:26:14.444 "params": { 00:26:14.444 "impl_name": "ssl", 00:26:14.444 "recv_buf_size": 4096, 00:26:14.444 "send_buf_size": 4096, 00:26:14.444 "enable_recv_pipe": true, 00:26:14.444 "enable_quickack": false, 00:26:14.444 "enable_placement_id": 0, 00:26:14.444 "enable_zerocopy_send_server": true, 00:26:14.444 "enable_zerocopy_send_client": false, 00:26:14.444 "zerocopy_threshold": 0, 00:26:14.444 "tls_version": 0, 00:26:14.444 "enable_ktls": false 00:26:14.444 } 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "method": "sock_impl_set_options", 00:26:14.444 "params": { 00:26:14.444 "impl_name": "posix", 00:26:14.444 "recv_buf_size": 2097152, 00:26:14.444 "send_buf_size": 2097152, 00:26:14.444 "enable_recv_pipe": true, 00:26:14.444 "enable_quickack": false, 00:26:14.444 "enable_placement_id": 0, 00:26:14.444 "enable_zerocopy_send_server": true, 00:26:14.444 "enable_zerocopy_send_client": false, 00:26:14.444 "zerocopy_threshold": 0, 00:26:14.444 "tls_version": 0, 00:26:14.444 "enable_ktls": false 00:26:14.444 } 00:26:14.444 } 00:26:14.444 ] 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "subsystem": "vmd", 00:26:14.444 "config": [] 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "subsystem": "accel", 00:26:14.444 "config": [ 00:26:14.444 { 00:26:14.444 "method": 
"accel_set_options", 00:26:14.444 "params": { 00:26:14.444 "small_cache_size": 128, 00:26:14.444 "large_cache_size": 16, 00:26:14.444 "task_count": 2048, 00:26:14.444 "sequence_count": 2048, 00:26:14.444 "buf_count": 2048 00:26:14.444 } 00:26:14.444 } 00:26:14.444 ] 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "subsystem": "bdev", 00:26:14.444 "config": [ 00:26:14.444 { 00:26:14.444 "method": "bdev_set_options", 00:26:14.444 "params": { 00:26:14.444 "bdev_io_pool_size": 65535, 00:26:14.444 "bdev_io_cache_size": 256, 00:26:14.444 "bdev_auto_examine": true, 00:26:14.444 "iobuf_small_cache_size": 128, 00:26:14.444 "iobuf_large_cache_size": 16 00:26:14.444 } 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "method": "bdev_raid_set_options", 00:26:14.444 "params": { 00:26:14.444 "process_window_size_kb": 1024, 00:26:14.444 "process_max_bandwidth_mb_sec": 0 00:26:14.444 } 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "method": "bdev_iscsi_set_options", 00:26:14.444 "params": { 00:26:14.444 "timeout_sec": 30 00:26:14.444 } 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "method": "bdev_nvme_set_options", 00:26:14.444 "params": { 00:26:14.444 "action_on_timeout": "none", 00:26:14.444 "timeout_us": 0, 00:26:14.444 "timeout_admin_us": 0, 00:26:14.444 "keep_alive_timeout_ms": 10000, 00:26:14.444 "arbitration_burst": 0, 00:26:14.444 "low_priority_weight": 0, 00:26:14.444 "medium_priority_weight": 0, 00:26:14.444 "high_priority_weight": 0, 00:26:14.444 "nvme_adminq_poll_period_us": 10000, 00:26:14.444 "nvme_ioq_poll_period_us": 0, 00:26:14.444 "io_queue_requests": 0, 00:26:14.444 "delay_cmd_submit": true, 00:26:14.444 "transport_retry_count": 4, 00:26:14.444 "bdev_retry_count": 3, 00:26:14.444 "transport_ack_timeout": 0, 00:26:14.444 "ctrlr_loss_timeout_sec": 0, 00:26:14.444 "reconnect_delay_sec": 0, 00:26:14.444 "fast_io_fail_timeout_sec": 0, 00:26:14.444 "disable_auto_failback": false, 00:26:14.444 "generate_uuids": false, 00:26:14.444 "transport_tos": 0, 00:26:14.444 "nvme_error_stat": 
false, 00:26:14.444 "rdma_srq_size": 0, 00:26:14.444 "io_path_stat": false, 00:26:14.444 "allow_accel_sequence": false, 00:26:14.444 "rdma_max_cq_size": 0, 00:26:14.444 "rdma_cm_event_timeout_ms": 0, 00:26:14.444 "dhchap_digests": [ 00:26:14.444 "sha256", 00:26:14.444 "sha384", 00:26:14.444 "sha512" 00:26:14.444 ], 00:26:14.444 "dhchap_dhgroups": [ 00:26:14.444 "null", 00:26:14.444 "ffdhe2048", 00:26:14.444 "ffdhe3072", 00:26:14.444 "ffdhe4096", 00:26:14.444 "ffdhe6144", 00:26:14.444 "ffdhe8192" 00:26:14.444 ] 00:26:14.444 } 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "method": "bdev_nvme_set_hotplug", 00:26:14.444 "params": { 00:26:14.444 "period_us": 100000, 00:26:14.444 "enable": false 00:26:14.444 } 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "method": "bdev_malloc_create", 00:26:14.444 "params": { 00:26:14.444 "name": "malloc0", 00:26:14.444 "num_blocks": 8192, 00:26:14.444 "block_size": 4096, 00:26:14.444 "physical_block_size": 4096, 00:26:14.444 "uuid": "f92ea103-ef2f-4ac6-b152-37e557beb96a", 00:26:14.444 "optimal_io_boundary": 0 00:26:14.444 } 00:26:14.444 }, 00:26:14.444 { 00:26:14.444 "method": "bdev_wait_for_examine" 00:26:14.444 } 00:26:14.445 ] 00:26:14.445 }, 00:26:14.445 { 00:26:14.445 "subsystem": "nbd", 00:26:14.445 "config": [] 00:26:14.445 }, 00:26:14.445 { 00:26:14.445 "subsystem": "scheduler", 00:26:14.445 "config": [ 00:26:14.445 { 00:26:14.445 "method": "framework_set_scheduler", 00:26:14.445 "params": { 00:26:14.445 "name": "static" 00:26:14.445 } 00:26:14.445 } 00:26:14.445 ] 00:26:14.445 }, 00:26:14.445 { 00:26:14.445 "subsystem": "nvmf", 00:26:14.445 "config": [ 00:26:14.445 { 00:26:14.445 "method": "nvmf_set_config", 00:26:14.445 "params": { 00:26:14.445 "discovery_filter": "match_any", 00:26:14.445 "admin_cmd_passthru": { 00:26:14.445 "identify_ctrlr": false 00:26:14.445 } 00:26:14.445 } 00:26:14.445 }, 00:26:14.445 { 00:26:14.445 "method": "nvmf_set_max_subsystems", 00:26:14.445 "params": { 00:26:14.445 "max_subsystems": 1024 
00:26:14.445 } 00:26:14.445 }, 00:26:14.445 { 00:26:14.445 "method": "nvmf_set_crdt", 00:26:14.445 "params": { 00:26:14.445 "crdt1": 0, 00:26:14.445 "crdt2": 0, 00:26:14.445 "crdt3": 0 00:26:14.445 } 00:26:14.445 }, 00:26:14.445 { 00:26:14.445 "method": "nvmf_create_transport", 00:26:14.445 "params": { 00:26:14.445 "trtype": "TCP", 00:26:14.445 "max_queue_depth": 128, 00:26:14.445 "max_io_qpairs_per_ctrlr": 127, 00:26:14.445 "in_capsule_data_size": 4096, 00:26:14.445 "max_io_size": 131072, 00:26:14.445 "io_unit_size": 131072, 00:26:14.445 "max_aq_depth": 128, 00:26:14.445 "num_shared_buffers": 511, 00:26:14.445 "buf_cache_size": 4294967295, 00:26:14.445 "dif_insert_or_strip": false, 00:26:14.445 "zcopy": false, 00:26:14.445 "c2h_success": false, 00:26:14.445 "sock_priority": 0, 00:26:14.445 "abort_timeout_sec": 1, 00:26:14.445 "ack_timeout": 0, 00:26:14.445 "data_wr_pool_size": 0 00:26:14.445 } 00:26:14.445 }, 00:26:14.445 { 00:26:14.445 "method": "nvmf_create_subsystem", 00:26:14.445 "params": { 00:26:14.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.445 "allow_any_host": false, 00:26:14.445 "serial_number": "SPDK00000000000001", 00:26:14.445 "model_number": "SPDK bdev Controller", 00:26:14.445 "max_namespaces": 10, 00:26:14.445 "min_cntlid": 1, 00:26:14.445 "max_cntlid": 65519, 00:26:14.445 "ana_reporting": false 00:26:14.445 } 00:26:14.445 }, 00:26:14.445 { 00:26:14.445 "method": "nvmf_subsystem_add_host", 00:26:14.445 "params": { 00:26:14.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.445 "host": "nqn.2016-06.io.spdk:host1", 00:26:14.445 "psk": "/tmp/tmp.JPBYj9wOX0" 00:26:14.445 } 00:26:14.445 }, 00:26:14.445 { 00:26:14.445 "method": "nvmf_subsystem_add_ns", 00:26:14.445 "params": { 00:26:14.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.445 "namespace": { 00:26:14.445 "nsid": 1, 00:26:14.445 "bdev_name": "malloc0", 00:26:14.445 "nguid": "F92EA103EF2F4AC6B15237E557BEB96A", 00:26:14.445 "uuid": "f92ea103-ef2f-4ac6-b152-37e557beb96a", 00:26:14.445 
"no_auto_visible": false 00:26:14.445 } 00:26:14.445 } 00:26:14.445 }, 00:26:14.445 { 00:26:14.445 "method": "nvmf_subsystem_add_listener", 00:26:14.445 "params": { 00:26:14.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.445 "listen_address": { 00:26:14.445 "trtype": "TCP", 00:26:14.445 "adrfam": "IPv4", 00:26:14.445 "traddr": "10.0.0.2", 00:26:14.445 "trsvcid": "4420" 00:26:14.445 }, 00:26:14.445 "secure_channel": true 00:26:14.445 } 00:26:14.445 } 00:26:14.445 ] 00:26:14.445 } 00:26:14.445 ] 00:26:14.445 }' 00:26:14.445 08:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:26:15.011 "subsystems": [ 00:26:15.011 { 00:26:15.011 "subsystem": "keyring", 00:26:15.011 "config": [] 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "subsystem": "iobuf", 00:26:15.011 "config": [ 00:26:15.011 { 00:26:15.011 "method": "iobuf_set_options", 00:26:15.011 "params": { 00:26:15.011 "small_pool_count": 8192, 00:26:15.011 "large_pool_count": 1024, 00:26:15.011 "small_bufsize": 8192, 00:26:15.011 "large_bufsize": 135168 00:26:15.011 } 00:26:15.011 } 00:26:15.011 ] 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "subsystem": "sock", 00:26:15.011 "config": [ 00:26:15.011 { 00:26:15.011 "method": "sock_set_default_impl", 00:26:15.011 "params": { 00:26:15.011 "impl_name": "posix" 00:26:15.011 } 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "method": "sock_impl_set_options", 00:26:15.011 "params": { 00:26:15.011 "impl_name": "ssl", 00:26:15.011 "recv_buf_size": 4096, 00:26:15.011 "send_buf_size": 4096, 00:26:15.011 "enable_recv_pipe": true, 00:26:15.011 "enable_quickack": false, 00:26:15.011 "enable_placement_id": 0, 00:26:15.011 "enable_zerocopy_send_server": true, 00:26:15.011 "enable_zerocopy_send_client": false, 00:26:15.011 "zerocopy_threshold": 0, 00:26:15.011 "tls_version": 0, 00:26:15.011 "enable_ktls": 
false 00:26:15.011 } 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "method": "sock_impl_set_options", 00:26:15.011 "params": { 00:26:15.011 "impl_name": "posix", 00:26:15.011 "recv_buf_size": 2097152, 00:26:15.011 "send_buf_size": 2097152, 00:26:15.011 "enable_recv_pipe": true, 00:26:15.011 "enable_quickack": false, 00:26:15.011 "enable_placement_id": 0, 00:26:15.011 "enable_zerocopy_send_server": true, 00:26:15.011 "enable_zerocopy_send_client": false, 00:26:15.011 "zerocopy_threshold": 0, 00:26:15.011 "tls_version": 0, 00:26:15.011 "enable_ktls": false 00:26:15.011 } 00:26:15.011 } 00:26:15.011 ] 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "subsystem": "vmd", 00:26:15.011 "config": [] 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "subsystem": "accel", 00:26:15.011 "config": [ 00:26:15.011 { 00:26:15.011 "method": "accel_set_options", 00:26:15.011 "params": { 00:26:15.011 "small_cache_size": 128, 00:26:15.011 "large_cache_size": 16, 00:26:15.011 "task_count": 2048, 00:26:15.011 "sequence_count": 2048, 00:26:15.011 "buf_count": 2048 00:26:15.011 } 00:26:15.011 } 00:26:15.011 ] 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "subsystem": "bdev", 00:26:15.011 "config": [ 00:26:15.011 { 00:26:15.011 "method": "bdev_set_options", 00:26:15.011 "params": { 00:26:15.011 "bdev_io_pool_size": 65535, 00:26:15.011 "bdev_io_cache_size": 256, 00:26:15.011 "bdev_auto_examine": true, 00:26:15.011 "iobuf_small_cache_size": 128, 00:26:15.011 "iobuf_large_cache_size": 16 00:26:15.011 } 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "method": "bdev_raid_set_options", 00:26:15.011 "params": { 00:26:15.011 "process_window_size_kb": 1024, 00:26:15.011 "process_max_bandwidth_mb_sec": 0 00:26:15.011 } 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "method": "bdev_iscsi_set_options", 00:26:15.011 "params": { 00:26:15.011 "timeout_sec": 30 00:26:15.011 } 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "method": "bdev_nvme_set_options", 00:26:15.011 "params": { 00:26:15.011 "action_on_timeout": "none", 
00:26:15.011 "timeout_us": 0, 00:26:15.011 "timeout_admin_us": 0, 00:26:15.011 "keep_alive_timeout_ms": 10000, 00:26:15.011 "arbitration_burst": 0, 00:26:15.011 "low_priority_weight": 0, 00:26:15.011 "medium_priority_weight": 0, 00:26:15.011 "high_priority_weight": 0, 00:26:15.011 "nvme_adminq_poll_period_us": 10000, 00:26:15.011 "nvme_ioq_poll_period_us": 0, 00:26:15.011 "io_queue_requests": 512, 00:26:15.011 "delay_cmd_submit": true, 00:26:15.011 "transport_retry_count": 4, 00:26:15.011 "bdev_retry_count": 3, 00:26:15.011 "transport_ack_timeout": 0, 00:26:15.011 "ctrlr_loss_timeout_sec": 0, 00:26:15.011 "reconnect_delay_sec": 0, 00:26:15.011 "fast_io_fail_timeout_sec": 0, 00:26:15.011 "disable_auto_failback": false, 00:26:15.011 "generate_uuids": false, 00:26:15.011 "transport_tos": 0, 00:26:15.011 "nvme_error_stat": false, 00:26:15.011 "rdma_srq_size": 0, 00:26:15.011 "io_path_stat": false, 00:26:15.011 "allow_accel_sequence": false, 00:26:15.011 "rdma_max_cq_size": 0, 00:26:15.011 "rdma_cm_event_timeout_ms": 0, 00:26:15.011 "dhchap_digests": [ 00:26:15.011 "sha256", 00:26:15.011 "sha384", 00:26:15.011 "sha512" 00:26:15.011 ], 00:26:15.011 "dhchap_dhgroups": [ 00:26:15.011 "null", 00:26:15.011 "ffdhe2048", 00:26:15.011 "ffdhe3072", 00:26:15.011 "ffdhe4096", 00:26:15.011 "ffdhe6144", 00:26:15.011 "ffdhe8192" 00:26:15.011 ] 00:26:15.011 } 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "method": "bdev_nvme_attach_controller", 00:26:15.011 "params": { 00:26:15.011 "name": "TLSTEST", 00:26:15.011 "trtype": "TCP", 00:26:15.011 "adrfam": "IPv4", 00:26:15.011 "traddr": "10.0.0.2", 00:26:15.011 "trsvcid": "4420", 00:26:15.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.011 "prchk_reftag": false, 00:26:15.011 "prchk_guard": false, 00:26:15.011 "ctrlr_loss_timeout_sec": 0, 00:26:15.011 "reconnect_delay_sec": 0, 00:26:15.011 "fast_io_fail_timeout_sec": 0, 00:26:15.011 "psk": "/tmp/tmp.JPBYj9wOX0", 00:26:15.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:15.011 "hdgst": 
false, 00:26:15.011 "ddgst": false 00:26:15.011 } 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "method": "bdev_nvme_set_hotplug", 00:26:15.011 "params": { 00:26:15.011 "period_us": 100000, 00:26:15.011 "enable": false 00:26:15.011 } 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "method": "bdev_wait_for_examine" 00:26:15.011 } 00:26:15.011 ] 00:26:15.011 }, 00:26:15.011 { 00:26:15.011 "subsystem": "nbd", 00:26:15.011 "config": [] 00:26:15.011 } 00:26:15.011 ] 00:26:15.011 }' 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 4161873 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4161873 ']' 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4161873 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4161873 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4161873' 00:26:15.011 killing process with pid 4161873 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4161873 00:26:15.011 Received shutdown signal, test time was about 10.000000 seconds 00:26:15.011 00:26:15.011 Latency(us) 00:26:15.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.011 =================================================================================================================== 00:26:15.011 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:15.011 [2024-07-21 08:22:24.387117] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:15.011 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4161873 00:26:15.012 08:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 4161645 00:26:15.012 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4161645 ']' 00:26:15.012 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4161645 00:26:15.012 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:15.012 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:15.012 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4161645 00:26:15.271 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:15.272 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:15.272 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4161645' 00:26:15.272 killing process with pid 4161645 00:26:15.272 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4161645 00:26:15.272 [2024-07-21 08:22:24.641723] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:15.272 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4161645 00:26:15.272 08:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:26:15.272 08:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:15.272 08:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:26:15.272 "subsystems": [ 00:26:15.272 { 00:26:15.272 "subsystem": "keyring", 00:26:15.272 "config": [] 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "subsystem": "iobuf", 00:26:15.272 "config": [ 00:26:15.272 { 00:26:15.272 "method": 
"iobuf_set_options", 00:26:15.272 "params": { 00:26:15.272 "small_pool_count": 8192, 00:26:15.272 "large_pool_count": 1024, 00:26:15.272 "small_bufsize": 8192, 00:26:15.272 "large_bufsize": 135168 00:26:15.272 } 00:26:15.272 } 00:26:15.272 ] 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "subsystem": "sock", 00:26:15.272 "config": [ 00:26:15.272 { 00:26:15.272 "method": "sock_set_default_impl", 00:26:15.272 "params": { 00:26:15.272 "impl_name": "posix" 00:26:15.272 } 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "method": "sock_impl_set_options", 00:26:15.272 "params": { 00:26:15.272 "impl_name": "ssl", 00:26:15.272 "recv_buf_size": 4096, 00:26:15.272 "send_buf_size": 4096, 00:26:15.272 "enable_recv_pipe": true, 00:26:15.272 "enable_quickack": false, 00:26:15.272 "enable_placement_id": 0, 00:26:15.272 "enable_zerocopy_send_server": true, 00:26:15.272 "enable_zerocopy_send_client": false, 00:26:15.272 "zerocopy_threshold": 0, 00:26:15.272 "tls_version": 0, 00:26:15.272 "enable_ktls": false 00:26:15.272 } 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "method": "sock_impl_set_options", 00:26:15.272 "params": { 00:26:15.272 "impl_name": "posix", 00:26:15.272 "recv_buf_size": 2097152, 00:26:15.272 "send_buf_size": 2097152, 00:26:15.272 "enable_recv_pipe": true, 00:26:15.272 "enable_quickack": false, 00:26:15.272 "enable_placement_id": 0, 00:26:15.272 "enable_zerocopy_send_server": true, 00:26:15.272 "enable_zerocopy_send_client": false, 00:26:15.272 "zerocopy_threshold": 0, 00:26:15.272 "tls_version": 0, 00:26:15.272 "enable_ktls": false 00:26:15.272 } 00:26:15.272 } 00:26:15.272 ] 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "subsystem": "vmd", 00:26:15.272 "config": [] 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "subsystem": "accel", 00:26:15.272 "config": [ 00:26:15.272 { 00:26:15.272 "method": "accel_set_options", 00:26:15.272 "params": { 00:26:15.272 "small_cache_size": 128, 00:26:15.272 "large_cache_size": 16, 00:26:15.272 "task_count": 2048, 00:26:15.272 
"sequence_count": 2048, 00:26:15.272 "buf_count": 2048 00:26:15.272 } 00:26:15.272 } 00:26:15.272 ] 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "subsystem": "bdev", 00:26:15.272 "config": [ 00:26:15.272 { 00:26:15.272 "method": "bdev_set_options", 00:26:15.272 "params": { 00:26:15.272 "bdev_io_pool_size": 65535, 00:26:15.272 "bdev_io_cache_size": 256, 00:26:15.272 "bdev_auto_examine": true, 00:26:15.272 "iobuf_small_cache_size": 128, 00:26:15.272 "iobuf_large_cache_size": 16 00:26:15.272 } 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "method": "bdev_raid_set_options", 00:26:15.272 "params": { 00:26:15.272 "process_window_size_kb": 1024, 00:26:15.272 "process_max_bandwidth_mb_sec": 0 00:26:15.272 } 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "method": "bdev_iscsi_set_options", 00:26:15.272 "params": { 00:26:15.272 "timeout_sec": 30 00:26:15.272 } 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "method": "bdev_nvme_set_options", 00:26:15.272 "params": { 00:26:15.272 "action_on_timeout": "none", 00:26:15.272 "timeout_us": 0, 00:26:15.272 "timeout_admin_us": 0, 00:26:15.272 "keep_alive_timeout_ms": 10000, 00:26:15.272 "arbitration_burst": 0, 00:26:15.272 "low_priority_weight": 0, 00:26:15.272 "medium_priority_weight": 0, 00:26:15.272 "high_priority_weight": 0, 00:26:15.272 "nvme_adminq_poll_period_us": 10000, 00:26:15.272 "nvme_ioq_poll_period_us": 0, 00:26:15.272 "io_queue_requests": 0, 00:26:15.272 "delay_cmd_submit": true, 00:26:15.272 "transport_retry_count": 4, 00:26:15.272 "bdev_retry_count": 3, 00:26:15.272 "transport_ack_timeout": 0, 00:26:15.272 "ctrlr_loss_timeout_sec": 0, 00:26:15.272 "reconnect_delay_sec": 0, 00:26:15.272 "fast_io_fail_timeout_sec": 0, 00:26:15.272 "disable_auto_failback": false, 00:26:15.272 "generate_uuids": false, 00:26:15.272 "transport_tos": 0, 00:26:15.272 "nvme_error_stat": false, 00:26:15.272 "rdma_srq_size": 0, 00:26:15.272 "io_path_stat": false, 00:26:15.272 "allow_accel_sequence": false, 00:26:15.272 "rdma_max_cq_size": 0, 
00:26:15.272 "rdma_cm_event_timeout_ms": 0, 00:26:15.272 "dhchap_digests": [ 00:26:15.272 "sha256", 00:26:15.272 "sha384", 00:26:15.272 "sha512" 00:26:15.272 ], 00:26:15.272 "dhchap_dhgroups": [ 00:26:15.272 "null", 00:26:15.272 "ffdhe2048", 00:26:15.272 "ffdhe3072", 00:26:15.272 "ffdhe4096", 00:26:15.272 "ffdhe6144", 00:26:15.272 "ffdhe8192" 00:26:15.272 ] 00:26:15.272 } 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "method": "bdev_nvme_set_hotplug", 00:26:15.272 "params": { 00:26:15.272 "period_us": 100000, 00:26:15.272 "enable": false 00:26:15.272 } 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "method": "bdev_malloc_create", 00:26:15.272 "params": { 00:26:15.272 "name": "malloc0", 00:26:15.272 "num_blocks": 8192, 00:26:15.272 "block_size": 4096, 00:26:15.272 "physical_block_size": 4096, 00:26:15.272 "uuid": "f92ea103-ef2f-4ac6-b152-37e557beb96a", 00:26:15.272 "optimal_io_boundary": 0 00:26:15.272 } 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "method": "bdev_wait_for_examine" 00:26:15.272 } 00:26:15.272 ] 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "subsystem": "nbd", 00:26:15.272 "config": [] 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "subsystem": "scheduler", 00:26:15.272 "config": [ 00:26:15.272 { 00:26:15.272 "method": "framework_set_scheduler", 00:26:15.272 "params": { 00:26:15.272 "name": "static" 00:26:15.272 } 00:26:15.272 } 00:26:15.272 ] 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "subsystem": "nvmf", 00:26:15.272 "config": [ 00:26:15.272 { 00:26:15.272 "method": "nvmf_set_config", 00:26:15.272 "params": { 00:26:15.272 "discovery_filter": "match_any", 00:26:15.272 "admin_cmd_passthru": { 00:26:15.272 "identify_ctrlr": false 00:26:15.272 } 00:26:15.272 } 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "method": "nvmf_set_max_subsystems", 00:26:15.272 "params": { 00:26:15.272 "max_subsystems": 1024 00:26:15.272 } 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "method": "nvmf_set_crdt", 00:26:15.272 "params": { 00:26:15.272 "crdt1": 0, 00:26:15.272 "crdt2": 0, 
00:26:15.272 "crdt3": 0 00:26:15.272 } 00:26:15.272 }, 00:26:15.272 { 00:26:15.272 "method": "nvmf_create_transport", 00:26:15.272 "params": { 00:26:15.272 "trtype": "TCP", 00:26:15.272 "max_queue_depth": 128, 00:26:15.272 "max_io_qpairs_per_ctrlr": 127, 00:26:15.272 "in_capsule_data_size": 4096, 00:26:15.272 "max_io_size": 131072, 00:26:15.272 "io_unit_size": 131072, 00:26:15.272 "max_aq_depth": 128, 00:26:15.272 "num_shared_buffers": 511, 00:26:15.272 "buf_cache_size": 4294967295, 00:26:15.272 "dif_insert_or_strip": false, 00:26:15.272 "zcopy": false, 00:26:15.272 "c2h_success": false, 00:26:15.273 "sock_priority": 0, 00:26:15.273 "abort_timeout_sec": 1, 00:26:15.273 "ack_timeout": 0, 00:26:15.273 "data_wr_pool_size": 0 00:26:15.273 } 00:26:15.273 }, 00:26:15.273 { 00:26:15.273 "method": "nvmf_create_subsystem", 00:26:15.273 "params": { 00:26:15.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.273 "allow_any_host": false, 00:26:15.273 "serial_number": "SPDK00000000000001", 00:26:15.273 "model_number": "SPDK bdev Controller", 00:26:15.273 "max_namespaces": 10, 00:26:15.273 "min_cntlid": 1, 00:26:15.273 "max_cntlid": 65519, 00:26:15.273 "ana_reporting": false 00:26:15.273 } 00:26:15.273 }, 00:26:15.273 { 00:26:15.273 "method": "nvmf_subsystem_add_host", 00:26:15.273 "params": { 00:26:15.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.273 "host": "nqn.2016-06.io.spdk:host1", 00:26:15.273 "psk": "/tmp/tmp.JPBYj9wOX0" 00:26:15.273 } 00:26:15.273 }, 00:26:15.273 { 00:26:15.273 "method": "nvmf_subsystem_add_ns", 00:26:15.273 "params": { 00:26:15.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.273 "namespace": { 00:26:15.273 "nsid": 1, 00:26:15.273 "bdev_name": "malloc0", 00:26:15.273 "nguid": "F92EA103EF2F4AC6B15237E557BEB96A", 00:26:15.273 "uuid": "f92ea103-ef2f-4ac6-b152-37e557beb96a", 00:26:15.273 "no_auto_visible": false 00:26:15.273 } 00:26:15.273 } 00:26:15.273 }, 00:26:15.273 { 00:26:15.273 "method": "nvmf_subsystem_add_listener", 00:26:15.273 "params": { 
00:26:15.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.273 "listen_address": { 00:26:15.273 "trtype": "TCP", 00:26:15.273 "adrfam": "IPv4", 00:26:15.273 "traddr": "10.0.0.2", 00:26:15.273 "trsvcid": "4420" 00:26:15.273 }, 00:26:15.273 "secure_channel": true 00:26:15.273 } 00:26:15.273 } 00:26:15.273 ] 00:26:15.273 } 00:26:15.273 ] 00:26:15.273 }' 00:26:15.273 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:15.273 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:15.273 08:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4162085 00:26:15.273 08:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:26:15.273 08:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4162085 00:26:15.273 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4162085 ']' 00:26:15.273 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.273 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:15.273 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.273 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:15.273 08:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:15.531 [2024-07-21 08:22:24.944638] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:26:15.531 [2024-07-21 08:22:24.944728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.531 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.531 [2024-07-21 08:22:25.010222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.531 [2024-07-21 08:22:25.098582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.531 [2024-07-21 08:22:25.098647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.531 [2024-07-21 08:22:25.098663] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.531 [2024-07-21 08:22:25.098675] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.531 [2024-07-21 08:22:25.098685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:15.531 [2024-07-21 08:22:25.098766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.789 [2024-07-21 08:22:25.334821] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.789 [2024-07-21 08:22:25.367272] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:15.789 [2024-07-21 08:22:25.383333] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:15.789 [2024-07-21 08:22:25.383558] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4162238 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4162238 /var/tmp/bdevperf.sock 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4162238 ']' 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls 
-- target/tls.sh@204 -- # echo '{ 00:26:16.353 "subsystems": [ 00:26:16.353 { 00:26:16.353 "subsystem": "keyring", 00:26:16.353 "config": [] 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "subsystem": "iobuf", 00:26:16.353 "config": [ 00:26:16.353 { 00:26:16.353 "method": "iobuf_set_options", 00:26:16.353 "params": { 00:26:16.353 "small_pool_count": 8192, 00:26:16.353 "large_pool_count": 1024, 00:26:16.353 "small_bufsize": 8192, 00:26:16.353 "large_bufsize": 135168 00:26:16.353 } 00:26:16.353 } 00:26:16.353 ] 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "subsystem": "sock", 00:26:16.353 "config": [ 00:26:16.353 { 00:26:16.353 "method": "sock_set_default_impl", 00:26:16.353 "params": { 00:26:16.353 "impl_name": "posix" 00:26:16.353 } 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "method": "sock_impl_set_options", 00:26:16.353 "params": { 00:26:16.353 "impl_name": "ssl", 00:26:16.353 "recv_buf_size": 4096, 00:26:16.353 "send_buf_size": 4096, 00:26:16.353 "enable_recv_pipe": true, 00:26:16.353 "enable_quickack": false, 00:26:16.353 "enable_placement_id": 0, 00:26:16.353 "enable_zerocopy_send_server": true, 00:26:16.353 "enable_zerocopy_send_client": false, 00:26:16.353 "zerocopy_threshold": 0, 00:26:16.353 "tls_version": 0, 00:26:16.353 "enable_ktls": false 00:26:16.353 } 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "method": "sock_impl_set_options", 00:26:16.353 "params": { 00:26:16.353 "impl_name": "posix", 00:26:16.353 "recv_buf_size": 2097152, 00:26:16.353 "send_buf_size": 2097152, 00:26:16.353 "enable_recv_pipe": true, 00:26:16.353 "enable_quickack": false, 00:26:16.353 "enable_placement_id": 0, 00:26:16.353 "enable_zerocopy_send_server": true, 00:26:16.353 "enable_zerocopy_send_client": false, 00:26:16.353 "zerocopy_threshold": 0, 00:26:16.353 "tls_version": 0, 00:26:16.353 "enable_ktls": false 00:26:16.353 } 00:26:16.353 } 00:26:16.353 ] 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "subsystem": "vmd", 00:26:16.353 "config": [] 00:26:16.353 }, 00:26:16.353 { 
00:26:16.353 "subsystem": "accel", 00:26:16.353 "config": [ 00:26:16.353 { 00:26:16.353 "method": "accel_set_options", 00:26:16.353 "params": { 00:26:16.353 "small_cache_size": 128, 00:26:16.353 "large_cache_size": 16, 00:26:16.353 "task_count": 2048, 00:26:16.353 "sequence_count": 2048, 00:26:16.353 "buf_count": 2048 00:26:16.353 } 00:26:16.353 } 00:26:16.353 ] 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "subsystem": "bdev", 00:26:16.353 "config": [ 00:26:16.353 { 00:26:16.353 "method": "bdev_set_options", 00:26:16.353 "params": { 00:26:16.353 "bdev_io_pool_size": 65535, 00:26:16.353 "bdev_io_cache_size": 256, 00:26:16.353 "bdev_auto_examine": true, 00:26:16.353 "iobuf_small_cache_size": 128, 00:26:16.353 "iobuf_large_cache_size": 16 00:26:16.353 } 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "method": "bdev_raid_set_options", 00:26:16.353 "params": { 00:26:16.353 "process_window_size_kb": 1024, 00:26:16.353 "process_max_bandwidth_mb_sec": 0 00:26:16.353 } 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "method": "bdev_iscsi_set_options", 00:26:16.353 "params": { 00:26:16.353 "timeout_sec": 30 00:26:16.353 } 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "method": "bdev_nvme_set_options", 00:26:16.353 "params": { 00:26:16.353 "action_on_timeout": "none", 00:26:16.353 "timeout_us": 0, 00:26:16.353 "timeout_admin_us": 0, 00:26:16.353 "keep_alive_timeout_ms": 10000, 00:26:16.353 "arbitration_burst": 0, 00:26:16.353 "low_priority_weight": 0, 00:26:16.353 "medium_priority_weight": 0, 00:26:16.353 "high_priority_weight": 0, 00:26:16.353 "nvme_adminq_poll_period_us": 10000, 00:26:16.353 "nvme_ioq_poll_period_us": 0, 00:26:16.353 "io_queue_requests": 512, 00:26:16.353 "delay_cmd_submit": true, 00:26:16.353 "transport_retry_count": 4, 00:26:16.353 "bdev_retry_count": 3, 00:26:16.353 "transport_ack_timeout": 0, 00:26:16.353 "ctrlr_loss_timeout_sec": 0, 00:26:16.353 "reconnect_delay_sec": 0, 00:26:16.353 "fast_io_fail_timeout_sec": 0, 00:26:16.353 "disable_auto_failback": false, 
00:26:16.353 "generate_uuids": false, 00:26:16.353 "transport_tos": 0, 00:26:16.353 "nvme_error_stat": false, 00:26:16.353 "rdma_srq_size": 0, 00:26:16.353 "io_path_stat": false, 00:26:16.353 "allow_accel_sequence": false, 00:26:16.353 "rdma_max_cq_size": 0, 00:26:16.353 "rdma_cm_event_timeout_ms": 0, 00:26:16.353 "dhchap_digests": [ 00:26:16.353 "sha256", 00:26:16.353 "sha384", 00:26:16.353 "sha512" 00:26:16.353 ], 00:26:16.353 "dhchap_dhgroups": [ 00:26:16.353 "null", 00:26:16.353 "ffdhe2048", 00:26:16.353 "ffdhe3072", 00:26:16.353 "ffdhe4096", 00:26:16.353 "ffdhe6144", 00:26:16.353 "ffdhe8192" 00:26:16.353 ] 00:26:16.353 } 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "method": "bdev_nvme_attach_controller", 00:26:16.353 "params": { 00:26:16.353 "name": "TLSTEST", 00:26:16.353 "trtype": "TCP", 00:26:16.353 "adrfam": "IPv4", 00:26:16.353 "traddr": "10.0.0.2", 00:26:16.353 "trsvcid": "4420", 00:26:16.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.353 "prchk_reftag": false, 00:26:16.353 "prchk_guard": false, 00:26:16.353 "ctrlr_loss_timeout_sec": 0, 00:26:16.353 "reconnect_delay_sec": 0, 00:26:16.353 "fast_io_fail_timeout_sec": 0, 00:26:16.353 "psk": "/tmp/tmp.JPBYj9wOX0", 00:26:16.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:16.353 "hdgst": false, 00:26:16.353 "ddgst": false 00:26:16.353 } 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "method": "bdev_nvme_set_hotplug", 00:26:16.353 "params": { 00:26:16.353 "period_us": 100000, 00:26:16.353 "enable": false 00:26:16.353 } 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "method": "bdev_wait_for_examine" 00:26:16.353 } 00:26:16.353 ] 00:26:16.353 }, 00:26:16.353 { 00:26:16.353 "subsystem": "nbd", 00:26:16.353 "config": [] 00:26:16.353 } 00:26:16.353 ] 00:26:16.353 }' 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:16.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:16.353 08:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:16.353 [2024-07-21 08:22:25.944834] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:26:16.353 [2024-07-21 08:22:25.944926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162238 ] 00:26:16.353 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.610 [2024-07-21 08:22:26.002548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.610 [2024-07-21 08:22:26.092547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.866 [2024-07-21 08:22:26.257800] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:16.866 [2024-07-21 08:22:26.257944] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:17.437 08:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:17.437 08:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:17.437 08:22:26 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:17.437 Running I/O for 10 seconds... 
00:26:29.655 00:26:29.655 Latency(us) 00:26:29.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.655 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:29.655 Verification LBA range: start 0x0 length 0x2000 00:26:29.655 TLSTESTn1 : 10.02 3449.48 13.47 0.00 0.00 37041.87 6019.60 39418.69 00:26:29.655 =================================================================================================================== 00:26:29.655 Total : 3449.48 13.47 0.00 0.00 37041.87 6019.60 39418.69 00:26:29.655 0 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 4162238 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4162238 ']' 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4162238 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4162238 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4162238' 00:26:29.655 killing process with pid 4162238 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4162238 00:26:29.655 Received shutdown signal, test time was about 10.000000 seconds 00:26:29.655 00:26:29.655 Latency(us) 00:26:29.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.655 
=================================================================================================================== 00:26:29.655 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:29.655 [2024-07-21 08:22:37.134235] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4162238 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 4162085 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4162085 ']' 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4162085 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4162085 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4162085' 00:26:29.655 killing process with pid 4162085 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4162085 00:26:29.655 [2024-07-21 08:22:37.387628] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4162085 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4163560 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4163560 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4163560 ']' 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:29.655 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:29.655 [2024-07-21 08:22:37.693359] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:26:29.655 [2024-07-21 08:22:37.693451] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.655 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.655 [2024-07-21 08:22:37.760012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.655 [2024-07-21 08:22:37.848673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:29.656 [2024-07-21 08:22:37.848729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.656 [2024-07-21 08:22:37.848745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.656 [2024-07-21 08:22:37.848765] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.656 [2024-07-21 08:22:37.848778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.656 [2024-07-21 08:22:37.848806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.656 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:29.656 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:29.656 08:22:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:29.656 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:29.656 08:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:29.656 08:22:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.656 08:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.JPBYj9wOX0 00:26:29.656 08:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JPBYj9wOX0 00:26:29.656 08:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:29.656 [2024-07-21 08:22:38.217319] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.656 08:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:29.656 08:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:29.656 [2024-07-21 08:22:38.698604] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:29.656 [2024-07-21 08:22:38.698832] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.656 08:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:29.656 malloc0 00:26:29.656 08:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:29.656 08:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JPBYj9wOX0 00:26:29.914 [2024-07-21 08:22:39.452820] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:29.914 08:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4163845 00:26:29.914 08:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:29.914 08:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:29.914 08:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4163845 /var/tmp/bdevperf.sock 00:26:29.914 08:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4163845 ']' 00:26:29.914 08:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:29.914 08:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:29.914 08:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:29.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:29.914 08:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:29.914 08:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:29.914 [2024-07-21 08:22:39.513803] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:26:29.914 [2024-07-21 08:22:39.513874] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163845 ] 00:26:29.914 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.172 [2024-07-21 08:22:39.574753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.172 [2024-07-21 08:22:39.665131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.172 08:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:30.172 08:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:30.172 08:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JPBYj9wOX0 00:26:30.430 08:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:30.687 [2024-07-21 08:22:40.266874] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:30.944 
nvme0n1 00:26:30.944 08:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:30.944 Running I/O for 1 seconds... 00:26:31.876 00:26:31.876 Latency(us) 00:26:31.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.876 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:31.876 Verification LBA range: start 0x0 length 0x2000 00:26:31.876 nvme0n1 : 1.02 3464.08 13.53 0.00 0.00 36547.76 8883.77 40583.77 00:26:31.876 =================================================================================================================== 00:26:31.876 Total : 3464.08 13.53 0.00 0.00 36547.76 8883.77 40583.77 00:26:31.876 0 00:26:31.876 08:22:41 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 4163845 00:26:31.876 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4163845 ']' 00:26:31.876 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4163845 00:26:31.876 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:31.876 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:32.134 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4163845 00:26:32.134 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:32.134 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:32.134 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4163845' 00:26:32.134 killing process with pid 4163845 00:26:32.134 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4163845 00:26:32.134 Received shutdown signal, test time was about 1.000000 seconds 00:26:32.134 00:26:32.134 Latency(us) 00:26:32.134 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:26:32.134 =================================================================================================================== 00:26:32.134 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:32.134 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4163845 00:26:32.134 08:22:41 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 4163560 00:26:32.134 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4163560 ']' 00:26:32.134 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4163560 00:26:32.134 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:32.134 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:32.391 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4163560 00:26:32.391 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:32.391 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:32.391 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4163560' 00:26:32.391 killing process with pid 4163560 00:26:32.391 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4163560 00:26:32.391 [2024-07-21 08:22:41.790029] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:32.391 08:22:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4163560 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=4164130 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4164130 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4164130 ']' 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.649 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:32.649 [2024-07-21 08:22:42.083285] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:26:32.649 [2024-07-21 08:22:42.083360] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.649 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.649 [2024-07-21 08:22:42.143981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.649 [2024-07-21 08:22:42.231310] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.649 [2024-07-21 08:22:42.231362] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:32.649 [2024-07-21 08:22:42.231390] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.649 [2024-07-21 08:22:42.231402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.649 [2024-07-21 08:22:42.231412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.649 [2024-07-21 08:22:42.231436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:32.960 [2024-07-21 08:22:42.368753] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.960 malloc0 00:26:32.960 [2024-07-21 08:22:42.400428] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:32.960 [2024-07-21 08:22:42.400710] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=4164161 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 4164161 /var/tmp/bdevperf.sock 00:26:32.960 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4164161 ']' 00:26:32.961 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:32.961 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:32.961 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:32.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:32.961 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.961 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:32.961 [2024-07-21 08:22:42.471570] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:26:32.961 [2024-07-21 08:22:42.471682] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4164161 ] 00:26:32.961 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.961 [2024-07-21 08:22:42.533894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.218 [2024-07-21 08:22:42.625473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.218 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:33.218 08:22:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:33.218 08:22:42 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JPBYj9wOX0 00:26:33.475 08:22:43 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:33.731 [2024-07-21 08:22:43.301670] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:33.988 nvme0n1 00:26:33.988 08:22:43 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:33.988 Running I/O for 1 seconds... 
00:26:34.920 00:26:34.920 Latency(us) 00:26:34.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.920 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:34.920 Verification LBA range: start 0x0 length 0x2000 00:26:34.920 nvme0n1 : 1.02 3246.44 12.68 0.00 0.00 39026.38 6602.15 30486.38 00:26:34.920 =================================================================================================================== 00:26:34.920 Total : 3246.44 12.68 0.00 0.00 39026.38 6602.15 30486.38 00:26:34.920 0 00:26:34.920 08:22:44 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:26:34.920 08:22:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.920 08:22:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:35.177 08:22:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.177 08:22:44 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:26:35.177 "subsystems": [ 00:26:35.177 { 00:26:35.177 "subsystem": "keyring", 00:26:35.177 "config": [ 00:26:35.177 { 00:26:35.177 "method": "keyring_file_add_key", 00:26:35.178 "params": { 00:26:35.178 "name": "key0", 00:26:35.178 "path": "/tmp/tmp.JPBYj9wOX0" 00:26:35.178 } 00:26:35.178 } 00:26:35.178 ] 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "subsystem": "iobuf", 00:26:35.178 "config": [ 00:26:35.178 { 00:26:35.178 "method": "iobuf_set_options", 00:26:35.178 "params": { 00:26:35.178 "small_pool_count": 8192, 00:26:35.178 "large_pool_count": 1024, 00:26:35.178 "small_bufsize": 8192, 00:26:35.178 "large_bufsize": 135168 00:26:35.178 } 00:26:35.178 } 00:26:35.178 ] 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "subsystem": "sock", 00:26:35.178 "config": [ 00:26:35.178 { 00:26:35.178 "method": "sock_set_default_impl", 00:26:35.178 "params": { 00:26:35.178 "impl_name": "posix" 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "sock_impl_set_options", 00:26:35.178 
"params": { 00:26:35.178 "impl_name": "ssl", 00:26:35.178 "recv_buf_size": 4096, 00:26:35.178 "send_buf_size": 4096, 00:26:35.178 "enable_recv_pipe": true, 00:26:35.178 "enable_quickack": false, 00:26:35.178 "enable_placement_id": 0, 00:26:35.178 "enable_zerocopy_send_server": true, 00:26:35.178 "enable_zerocopy_send_client": false, 00:26:35.178 "zerocopy_threshold": 0, 00:26:35.178 "tls_version": 0, 00:26:35.178 "enable_ktls": false 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "sock_impl_set_options", 00:26:35.178 "params": { 00:26:35.178 "impl_name": "posix", 00:26:35.178 "recv_buf_size": 2097152, 00:26:35.178 "send_buf_size": 2097152, 00:26:35.178 "enable_recv_pipe": true, 00:26:35.178 "enable_quickack": false, 00:26:35.178 "enable_placement_id": 0, 00:26:35.178 "enable_zerocopy_send_server": true, 00:26:35.178 "enable_zerocopy_send_client": false, 00:26:35.178 "zerocopy_threshold": 0, 00:26:35.178 "tls_version": 0, 00:26:35.178 "enable_ktls": false 00:26:35.178 } 00:26:35.178 } 00:26:35.178 ] 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "subsystem": "vmd", 00:26:35.178 "config": [] 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "subsystem": "accel", 00:26:35.178 "config": [ 00:26:35.178 { 00:26:35.178 "method": "accel_set_options", 00:26:35.178 "params": { 00:26:35.178 "small_cache_size": 128, 00:26:35.178 "large_cache_size": 16, 00:26:35.178 "task_count": 2048, 00:26:35.178 "sequence_count": 2048, 00:26:35.178 "buf_count": 2048 00:26:35.178 } 00:26:35.178 } 00:26:35.178 ] 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "subsystem": "bdev", 00:26:35.178 "config": [ 00:26:35.178 { 00:26:35.178 "method": "bdev_set_options", 00:26:35.178 "params": { 00:26:35.178 "bdev_io_pool_size": 65535, 00:26:35.178 "bdev_io_cache_size": 256, 00:26:35.178 "bdev_auto_examine": true, 00:26:35.178 "iobuf_small_cache_size": 128, 00:26:35.178 "iobuf_large_cache_size": 16 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "bdev_raid_set_options", 
00:26:35.178 "params": { 00:26:35.178 "process_window_size_kb": 1024, 00:26:35.178 "process_max_bandwidth_mb_sec": 0 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "bdev_iscsi_set_options", 00:26:35.178 "params": { 00:26:35.178 "timeout_sec": 30 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "bdev_nvme_set_options", 00:26:35.178 "params": { 00:26:35.178 "action_on_timeout": "none", 00:26:35.178 "timeout_us": 0, 00:26:35.178 "timeout_admin_us": 0, 00:26:35.178 "keep_alive_timeout_ms": 10000, 00:26:35.178 "arbitration_burst": 0, 00:26:35.178 "low_priority_weight": 0, 00:26:35.178 "medium_priority_weight": 0, 00:26:35.178 "high_priority_weight": 0, 00:26:35.178 "nvme_adminq_poll_period_us": 10000, 00:26:35.178 "nvme_ioq_poll_period_us": 0, 00:26:35.178 "io_queue_requests": 0, 00:26:35.178 "delay_cmd_submit": true, 00:26:35.178 "transport_retry_count": 4, 00:26:35.178 "bdev_retry_count": 3, 00:26:35.178 "transport_ack_timeout": 0, 00:26:35.178 "ctrlr_loss_timeout_sec": 0, 00:26:35.178 "reconnect_delay_sec": 0, 00:26:35.178 "fast_io_fail_timeout_sec": 0, 00:26:35.178 "disable_auto_failback": false, 00:26:35.178 "generate_uuids": false, 00:26:35.178 "transport_tos": 0, 00:26:35.178 "nvme_error_stat": false, 00:26:35.178 "rdma_srq_size": 0, 00:26:35.178 "io_path_stat": false, 00:26:35.178 "allow_accel_sequence": false, 00:26:35.178 "rdma_max_cq_size": 0, 00:26:35.178 "rdma_cm_event_timeout_ms": 0, 00:26:35.178 "dhchap_digests": [ 00:26:35.178 "sha256", 00:26:35.178 "sha384", 00:26:35.178 "sha512" 00:26:35.178 ], 00:26:35.178 "dhchap_dhgroups": [ 00:26:35.178 "null", 00:26:35.178 "ffdhe2048", 00:26:35.178 "ffdhe3072", 00:26:35.178 "ffdhe4096", 00:26:35.178 "ffdhe6144", 00:26:35.178 "ffdhe8192" 00:26:35.178 ] 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "bdev_nvme_set_hotplug", 00:26:35.178 "params": { 00:26:35.178 "period_us": 100000, 00:26:35.178 "enable": false 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 
{ 00:26:35.178 "method": "bdev_malloc_create", 00:26:35.178 "params": { 00:26:35.178 "name": "malloc0", 00:26:35.178 "num_blocks": 8192, 00:26:35.178 "block_size": 4096, 00:26:35.178 "physical_block_size": 4096, 00:26:35.178 "uuid": "8dcefb97-bf84-4084-8a7f-38ab75594fa4", 00:26:35.178 "optimal_io_boundary": 0 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "bdev_wait_for_examine" 00:26:35.178 } 00:26:35.178 ] 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "subsystem": "nbd", 00:26:35.178 "config": [] 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "subsystem": "scheduler", 00:26:35.178 "config": [ 00:26:35.178 { 00:26:35.178 "method": "framework_set_scheduler", 00:26:35.178 "params": { 00:26:35.178 "name": "static" 00:26:35.178 } 00:26:35.178 } 00:26:35.178 ] 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "subsystem": "nvmf", 00:26:35.178 "config": [ 00:26:35.178 { 00:26:35.178 "method": "nvmf_set_config", 00:26:35.178 "params": { 00:26:35.178 "discovery_filter": "match_any", 00:26:35.178 "admin_cmd_passthru": { 00:26:35.178 "identify_ctrlr": false 00:26:35.178 } 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "nvmf_set_max_subsystems", 00:26:35.178 "params": { 00:26:35.178 "max_subsystems": 1024 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "nvmf_set_crdt", 00:26:35.178 "params": { 00:26:35.178 "crdt1": 0, 00:26:35.178 "crdt2": 0, 00:26:35.178 "crdt3": 0 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "nvmf_create_transport", 00:26:35.178 "params": { 00:26:35.178 "trtype": "TCP", 00:26:35.178 "max_queue_depth": 128, 00:26:35.178 "max_io_qpairs_per_ctrlr": 127, 00:26:35.178 "in_capsule_data_size": 4096, 00:26:35.178 "max_io_size": 131072, 00:26:35.178 "io_unit_size": 131072, 00:26:35.178 "max_aq_depth": 128, 00:26:35.178 "num_shared_buffers": 511, 00:26:35.178 "buf_cache_size": 4294967295, 00:26:35.178 "dif_insert_or_strip": false, 00:26:35.178 "zcopy": false, 00:26:35.178 "c2h_success": 
false, 00:26:35.178 "sock_priority": 0, 00:26:35.178 "abort_timeout_sec": 1, 00:26:35.178 "ack_timeout": 0, 00:26:35.178 "data_wr_pool_size": 0 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "nvmf_create_subsystem", 00:26:35.178 "params": { 00:26:35.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.178 "allow_any_host": false, 00:26:35.178 "serial_number": "00000000000000000000", 00:26:35.178 "model_number": "SPDK bdev Controller", 00:26:35.178 "max_namespaces": 32, 00:26:35.178 "min_cntlid": 1, 00:26:35.178 "max_cntlid": 65519, 00:26:35.178 "ana_reporting": false 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "nvmf_subsystem_add_host", 00:26:35.178 "params": { 00:26:35.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.178 "host": "nqn.2016-06.io.spdk:host1", 00:26:35.178 "psk": "key0" 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "nvmf_subsystem_add_ns", 00:26:35.178 "params": { 00:26:35.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.178 "namespace": { 00:26:35.178 "nsid": 1, 00:26:35.178 "bdev_name": "malloc0", 00:26:35.178 "nguid": "8DCEFB97BF8440848A7F38AB75594FA4", 00:26:35.178 "uuid": "8dcefb97-bf84-4084-8a7f-38ab75594fa4", 00:26:35.178 "no_auto_visible": false 00:26:35.178 } 00:26:35.178 } 00:26:35.178 }, 00:26:35.178 { 00:26:35.178 "method": "nvmf_subsystem_add_listener", 00:26:35.178 "params": { 00:26:35.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.178 "listen_address": { 00:26:35.178 "trtype": "TCP", 00:26:35.178 "adrfam": "IPv4", 00:26:35.178 "traddr": "10.0.0.2", 00:26:35.178 "trsvcid": "4420" 00:26:35.178 }, 00:26:35.178 "secure_channel": false, 00:26:35.178 "sock_impl": "ssl" 00:26:35.178 } 00:26:35.178 } 00:26:35.178 ] 00:26:35.178 } 00:26:35.178 ] 00:26:35.178 }' 00:26:35.178 08:22:44 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:35.437 08:22:45 nvmf_tcp.nvmf_tls -- 
target/tls.sh@266 -- # bperfcfg='{ 00:26:35.437 "subsystems": [ 00:26:35.437 { 00:26:35.437 "subsystem": "keyring", 00:26:35.437 "config": [ 00:26:35.437 { 00:26:35.437 "method": "keyring_file_add_key", 00:26:35.437 "params": { 00:26:35.437 "name": "key0", 00:26:35.437 "path": "/tmp/tmp.JPBYj9wOX0" 00:26:35.437 } 00:26:35.437 } 00:26:35.437 ] 00:26:35.437 }, 00:26:35.437 { 00:26:35.437 "subsystem": "iobuf", 00:26:35.437 "config": [ 00:26:35.437 { 00:26:35.437 "method": "iobuf_set_options", 00:26:35.437 "params": { 00:26:35.437 "small_pool_count": 8192, 00:26:35.437 "large_pool_count": 1024, 00:26:35.437 "small_bufsize": 8192, 00:26:35.437 "large_bufsize": 135168 00:26:35.437 } 00:26:35.437 } 00:26:35.437 ] 00:26:35.437 }, 00:26:35.437 { 00:26:35.437 "subsystem": "sock", 00:26:35.437 "config": [ 00:26:35.437 { 00:26:35.437 "method": "sock_set_default_impl", 00:26:35.437 "params": { 00:26:35.437 "impl_name": "posix" 00:26:35.437 } 00:26:35.437 }, 00:26:35.437 { 00:26:35.437 "method": "sock_impl_set_options", 00:26:35.437 "params": { 00:26:35.437 "impl_name": "ssl", 00:26:35.437 "recv_buf_size": 4096, 00:26:35.437 "send_buf_size": 4096, 00:26:35.437 "enable_recv_pipe": true, 00:26:35.437 "enable_quickack": false, 00:26:35.437 "enable_placement_id": 0, 00:26:35.437 "enable_zerocopy_send_server": true, 00:26:35.437 "enable_zerocopy_send_client": false, 00:26:35.437 "zerocopy_threshold": 0, 00:26:35.437 "tls_version": 0, 00:26:35.437 "enable_ktls": false 00:26:35.437 } 00:26:35.437 }, 00:26:35.437 { 00:26:35.437 "method": "sock_impl_set_options", 00:26:35.437 "params": { 00:26:35.437 "impl_name": "posix", 00:26:35.437 "recv_buf_size": 2097152, 00:26:35.437 "send_buf_size": 2097152, 00:26:35.437 "enable_recv_pipe": true, 00:26:35.437 "enable_quickack": false, 00:26:35.437 "enable_placement_id": 0, 00:26:35.437 "enable_zerocopy_send_server": true, 00:26:35.437 "enable_zerocopy_send_client": false, 00:26:35.437 "zerocopy_threshold": 0, 00:26:35.437 "tls_version": 0, 
00:26:35.437 "enable_ktls": false 00:26:35.437 } 00:26:35.437 } 00:26:35.437 ] 00:26:35.437 }, 00:26:35.437 { 00:26:35.437 "subsystem": "vmd", 00:26:35.437 "config": [] 00:26:35.437 }, 00:26:35.437 { 00:26:35.437 "subsystem": "accel", 00:26:35.437 "config": [ 00:26:35.437 { 00:26:35.437 "method": "accel_set_options", 00:26:35.437 "params": { 00:26:35.437 "small_cache_size": 128, 00:26:35.437 "large_cache_size": 16, 00:26:35.437 "task_count": 2048, 00:26:35.437 "sequence_count": 2048, 00:26:35.437 "buf_count": 2048 00:26:35.437 } 00:26:35.437 } 00:26:35.437 ] 00:26:35.437 }, 00:26:35.437 { 00:26:35.437 "subsystem": "bdev", 00:26:35.437 "config": [ 00:26:35.437 { 00:26:35.437 "method": "bdev_set_options", 00:26:35.437 "params": { 00:26:35.437 "bdev_io_pool_size": 65535, 00:26:35.437 "bdev_io_cache_size": 256, 00:26:35.437 "bdev_auto_examine": true, 00:26:35.437 "iobuf_small_cache_size": 128, 00:26:35.437 "iobuf_large_cache_size": 16 00:26:35.437 } 00:26:35.437 }, 00:26:35.437 { 00:26:35.437 "method": "bdev_raid_set_options", 00:26:35.437 "params": { 00:26:35.437 "process_window_size_kb": 1024, 00:26:35.437 "process_max_bandwidth_mb_sec": 0 00:26:35.437 } 00:26:35.437 }, 00:26:35.437 { 00:26:35.437 "method": "bdev_iscsi_set_options", 00:26:35.437 "params": { 00:26:35.437 "timeout_sec": 30 00:26:35.437 } 00:26:35.437 }, 00:26:35.437 { 00:26:35.437 "method": "bdev_nvme_set_options", 00:26:35.437 "params": { 00:26:35.437 "action_on_timeout": "none", 00:26:35.437 "timeout_us": 0, 00:26:35.437 "timeout_admin_us": 0, 00:26:35.437 "keep_alive_timeout_ms": 10000, 00:26:35.437 "arbitration_burst": 0, 00:26:35.437 "low_priority_weight": 0, 00:26:35.437 "medium_priority_weight": 0, 00:26:35.437 "high_priority_weight": 0, 00:26:35.437 "nvme_adminq_poll_period_us": 10000, 00:26:35.437 "nvme_ioq_poll_period_us": 0, 00:26:35.437 "io_queue_requests": 512, 00:26:35.437 "delay_cmd_submit": true, 00:26:35.437 "transport_retry_count": 4, 00:26:35.437 "bdev_retry_count": 3, 00:26:35.437 
"transport_ack_timeout": 0, 00:26:35.437 "ctrlr_loss_timeout_sec": 0, 00:26:35.437 "reconnect_delay_sec": 0, 00:26:35.437 "fast_io_fail_timeout_sec": 0, 00:26:35.437 "disable_auto_failback": false, 00:26:35.437 "generate_uuids": false, 00:26:35.437 "transport_tos": 0, 00:26:35.437 "nvme_error_stat": false, 00:26:35.437 "rdma_srq_size": 0, 00:26:35.437 "io_path_stat": false, 00:26:35.437 "allow_accel_sequence": false, 00:26:35.437 "rdma_max_cq_size": 0, 00:26:35.437 "rdma_cm_event_timeout_ms": 0, 00:26:35.437 "dhchap_digests": [ 00:26:35.437 "sha256", 00:26:35.437 "sha384", 00:26:35.437 "sha512" 00:26:35.437 ], 00:26:35.437 "dhchap_dhgroups": [ 00:26:35.437 "null", 00:26:35.437 "ffdhe2048", 00:26:35.437 "ffdhe3072", 00:26:35.437 "ffdhe4096", 00:26:35.437 "ffdhe6144", 00:26:35.437 "ffdhe8192" 00:26:35.437 ] 00:26:35.437 } 00:26:35.437 }, 00:26:35.437 { 00:26:35.437 "method": "bdev_nvme_attach_controller", 00:26:35.437 "params": { 00:26:35.437 "name": "nvme0", 00:26:35.437 "trtype": "TCP", 00:26:35.437 "adrfam": "IPv4", 00:26:35.437 "traddr": "10.0.0.2", 00:26:35.437 "trsvcid": "4420", 00:26:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.437 "prchk_reftag": false, 00:26:35.437 "prchk_guard": false, 00:26:35.437 "ctrlr_loss_timeout_sec": 0, 00:26:35.437 "reconnect_delay_sec": 0, 00:26:35.437 "fast_io_fail_timeout_sec": 0, 00:26:35.437 "psk": "key0", 00:26:35.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.438 "hdgst": false, 00:26:35.438 "ddgst": false 00:26:35.438 } 00:26:35.438 }, 00:26:35.438 { 00:26:35.438 "method": "bdev_nvme_set_hotplug", 00:26:35.438 "params": { 00:26:35.438 "period_us": 100000, 00:26:35.438 "enable": false 00:26:35.438 } 00:26:35.438 }, 00:26:35.438 { 00:26:35.438 "method": "bdev_enable_histogram", 00:26:35.438 "params": { 00:26:35.438 "name": "nvme0n1", 00:26:35.438 "enable": true 00:26:35.438 } 00:26:35.438 }, 00:26:35.438 { 00:26:35.438 "method": "bdev_wait_for_examine" 00:26:35.438 } 00:26:35.438 ] 00:26:35.438 }, 00:26:35.438 
{ 00:26:35.438 "subsystem": "nbd", 00:26:35.438 "config": [] 00:26:35.438 } 00:26:35.438 ] 00:26:35.438 }' 00:26:35.438 08:22:45 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 4164161 00:26:35.438 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4164161 ']' 00:26:35.438 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4164161 00:26:35.438 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:35.438 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:35.438 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4164161 00:26:35.438 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:35.438 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:35.438 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4164161' 00:26:35.438 killing process with pid 4164161 00:26:35.438 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4164161 00:26:35.438 Received shutdown signal, test time was about 1.000000 seconds 00:26:35.438 00:26:35.438 Latency(us) 00:26:35.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.438 =================================================================================================================== 00:26:35.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.438 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4164161 00:26:35.695 08:22:45 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 4164130 00:26:35.695 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4164130 ']' 00:26:35.695 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4164130 00:26:35.695 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:35.695 08:22:45 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:35.695 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4164130 00:26:35.695 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:35.695 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:35.695 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4164130' 00:26:35.695 killing process with pid 4164130 00:26:35.695 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4164130 00:26:35.695 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4164130 00:26:35.954 08:22:45 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:26:35.954 08:22:45 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:26:35.954 "subsystems": [ 00:26:35.954 { 00:26:35.954 "subsystem": "keyring", 00:26:35.954 "config": [ 00:26:35.954 { 00:26:35.954 "method": "keyring_file_add_key", 00:26:35.954 "params": { 00:26:35.954 "name": "key0", 00:26:35.954 "path": "/tmp/tmp.JPBYj9wOX0" 00:26:35.954 } 00:26:35.954 } 00:26:35.954 ] 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "subsystem": "iobuf", 00:26:35.954 "config": [ 00:26:35.954 { 00:26:35.954 "method": "iobuf_set_options", 00:26:35.954 "params": { 00:26:35.954 "small_pool_count": 8192, 00:26:35.954 "large_pool_count": 1024, 00:26:35.954 "small_bufsize": 8192, 00:26:35.954 "large_bufsize": 135168 00:26:35.954 } 00:26:35.954 } 00:26:35.954 ] 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "subsystem": "sock", 00:26:35.954 "config": [ 00:26:35.954 { 00:26:35.954 "method": "sock_set_default_impl", 00:26:35.954 "params": { 00:26:35.954 "impl_name": "posix" 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "sock_impl_set_options", 00:26:35.954 "params": { 00:26:35.954 "impl_name": "ssl", 00:26:35.954 "recv_buf_size": 4096, 00:26:35.954 
"send_buf_size": 4096, 00:26:35.954 "enable_recv_pipe": true, 00:26:35.954 "enable_quickack": false, 00:26:35.954 "enable_placement_id": 0, 00:26:35.954 "enable_zerocopy_send_server": true, 00:26:35.954 "enable_zerocopy_send_client": false, 00:26:35.954 "zerocopy_threshold": 0, 00:26:35.954 "tls_version": 0, 00:26:35.954 "enable_ktls": false 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "sock_impl_set_options", 00:26:35.954 "params": { 00:26:35.954 "impl_name": "posix", 00:26:35.954 "recv_buf_size": 2097152, 00:26:35.954 "send_buf_size": 2097152, 00:26:35.954 "enable_recv_pipe": true, 00:26:35.954 "enable_quickack": false, 00:26:35.954 "enable_placement_id": 0, 00:26:35.954 "enable_zerocopy_send_server": true, 00:26:35.954 "enable_zerocopy_send_client": false, 00:26:35.954 "zerocopy_threshold": 0, 00:26:35.954 "tls_version": 0, 00:26:35.954 "enable_ktls": false 00:26:35.954 } 00:26:35.954 } 00:26:35.954 ] 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "subsystem": "vmd", 00:26:35.954 "config": [] 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "subsystem": "accel", 00:26:35.954 "config": [ 00:26:35.954 { 00:26:35.954 "method": "accel_set_options", 00:26:35.954 "params": { 00:26:35.954 "small_cache_size": 128, 00:26:35.954 "large_cache_size": 16, 00:26:35.954 "task_count": 2048, 00:26:35.954 "sequence_count": 2048, 00:26:35.954 "buf_count": 2048 00:26:35.954 } 00:26:35.954 } 00:26:35.954 ] 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "subsystem": "bdev", 00:26:35.954 "config": [ 00:26:35.954 { 00:26:35.954 "method": "bdev_set_options", 00:26:35.954 "params": { 00:26:35.954 "bdev_io_pool_size": 65535, 00:26:35.954 "bdev_io_cache_size": 256, 00:26:35.954 "bdev_auto_examine": true, 00:26:35.954 "iobuf_small_cache_size": 128, 00:26:35.954 "iobuf_large_cache_size": 16 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "bdev_raid_set_options", 00:26:35.954 "params": { 00:26:35.954 "process_window_size_kb": 1024, 00:26:35.954 
"process_max_bandwidth_mb_sec": 0 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "bdev_iscsi_set_options", 00:26:35.954 "params": { 00:26:35.954 "timeout_sec": 30 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "bdev_nvme_set_options", 00:26:35.954 "params": { 00:26:35.954 "action_on_timeout": "none", 00:26:35.954 "timeout_us": 0, 00:26:35.954 "timeout_admin_us": 0, 00:26:35.954 "keep_alive_timeout_ms": 10000, 00:26:35.954 "arbitration_burst": 0, 00:26:35.954 "low_priority_weight": 0, 00:26:35.954 "medium_priority_weight": 0, 00:26:35.954 "high_priority_weight": 0, 00:26:35.954 "nvme_adminq_poll_period_us": 10000, 00:26:35.954 "nvme_ioq_poll_period_us": 0, 00:26:35.954 "io_queue_requests": 0, 00:26:35.954 "delay_cmd_submit": true, 00:26:35.954 "transport_retry_count": 4, 00:26:35.954 "bdev_retry_count": 3, 00:26:35.954 "transport_ack_timeout": 0, 00:26:35.954 "ctrlr_loss_timeout_sec": 0, 00:26:35.954 "reconnect_delay_sec": 0, 00:26:35.954 "fast_io_fail_timeout_sec": 0, 00:26:35.954 "disable_auto_failback": false, 00:26:35.954 "generate_uuids": false, 00:26:35.954 "transport_tos": 0, 00:26:35.954 "nvme_error_stat": false, 00:26:35.954 "rdma_srq_size": 0, 00:26:35.954 "io_path_stat": false, 00:26:35.954 "allow_accel_sequence": false, 00:26:35.954 "rdma_max_cq_size": 0, 00:26:35.954 "rdma_cm_event_timeout_ms": 0, 00:26:35.954 "dhchap_digests": [ 00:26:35.954 "sha256", 00:26:35.954 "sha384", 00:26:35.954 "sha512" 00:26:35.954 ], 00:26:35.954 "dhchap_dhgroups": [ 00:26:35.954 "null", 00:26:35.954 "ffdhe2048", 00:26:35.954 "ffdhe3072", 00:26:35.954 "ffdhe4096", 00:26:35.954 "ffdhe6144", 00:26:35.954 "ffdhe8192" 00:26:35.954 ] 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "bdev_nvme_set_hotplug", 00:26:35.954 "params": { 00:26:35.954 "period_us": 100000, 00:26:35.954 "enable": false 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "bdev_malloc_create", 00:26:35.954 "params": { 
00:26:35.954 "name": "malloc0", 00:26:35.954 "num_blocks": 8192, 00:26:35.954 "block_size": 4096, 00:26:35.954 "physical_block_size": 4096, 00:26:35.954 "uuid": "8dcefb97-bf84-4084-8a7f-38ab75594fa4", 00:26:35.954 "optimal_io_boundary": 0 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "bdev_wait_for_examine" 00:26:35.954 } 00:26:35.954 ] 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "subsystem": "nbd", 00:26:35.954 "config": [] 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "subsystem": "scheduler", 00:26:35.954 "config": [ 00:26:35.954 { 00:26:35.954 "method": "framework_set_scheduler", 00:26:35.954 "params": { 00:26:35.954 "name": "static" 00:26:35.954 } 00:26:35.954 } 00:26:35.954 ] 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "subsystem": "nvmf", 00:26:35.954 "config": [ 00:26:35.954 { 00:26:35.954 "method": "nvmf_set_config", 00:26:35.954 "params": { 00:26:35.954 "discovery_filter": "match_any", 00:26:35.954 "admin_cmd_passthru": { 00:26:35.954 "identify_ctrlr": false 00:26:35.954 } 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "nvmf_set_max_subsystems", 00:26:35.954 "params": { 00:26:35.954 "max_subsystems": 1024 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "nvmf_set_crdt", 00:26:35.954 "params": { 00:26:35.954 "crdt1": 0, 00:26:35.954 "crdt2": 0, 00:26:35.954 "crdt3": 0 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "nvmf_create_transport", 00:26:35.954 "params": { 00:26:35.954 "trtype": "TCP", 00:26:35.954 "max_queue_depth": 128, 00:26:35.954 "max_io_qpairs_per_ctrlr": 127, 00:26:35.954 "in_capsule_data_size": 4096, 00:26:35.954 "max_io_size": 131072, 00:26:35.954 "io_unit_size": 131072, 00:26:35.954 "max_aq_depth": 128, 00:26:35.954 "num_shared_buffers": 511, 00:26:35.954 "buf_cache_size": 4294967295, 00:26:35.954 "dif_insert_or_strip": false, 00:26:35.954 "zcopy": false, 00:26:35.954 "c2h_success": false, 00:26:35.954 "sock_priority": 0, 00:26:35.954 "abort_timeout_sec": 
1, 00:26:35.954 "ack_timeout": 0, 00:26:35.954 "data_wr_pool_size": 0 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "nvmf_create_subsystem", 00:26:35.954 "params": { 00:26:35.954 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.954 "allow_any_host": false, 00:26:35.954 "serial_number": "00000000000000000000", 00:26:35.954 "model_number": "SPDK bdev Controller", 00:26:35.954 "max_namespaces": 32, 00:26:35.954 "min_cntlid": 1, 00:26:35.954 "max_cntlid": 65519, 00:26:35.954 "ana_reporting": false 00:26:35.954 } 00:26:35.954 }, 00:26:35.954 { 00:26:35.954 "method": "nvmf_subsystem_add_host", 00:26:35.954 "params": { 00:26:35.954 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.954 "host": "nqn.2016-06.io.spdk:host1", 00:26:35.954 "psk": "key0" 00:26:35.954 } 00:26:35.955 }, 00:26:35.955 { 00:26:35.955 "method": "nvmf_subsystem_add_ns", 00:26:35.955 "params": { 00:26:35.955 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.955 "namespace": { 00:26:35.955 "nsid": 1, 00:26:35.955 "bdev_name": "malloc0", 00:26:35.955 "nguid": "8DCEFB97BF8440848A7F38AB75594FA4", 00:26:35.955 "uuid": "8dcefb97-bf84-4084-8a7f-38ab75594fa4", 00:26:35.955 "no_auto_visible": false 00:26:35.955 } 00:26:35.955 } 00:26:35.955 }, 00:26:35.955 { 00:26:35.955 "method": "nvmf_subsystem_add_listener", 00:26:35.955 "params": { 00:26:35.955 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.955 "listen_address": { 00:26:35.955 "trtype": "TCP", 00:26:35.955 "adrfam": "IPv4", 00:26:35.955 "traddr": "10.0.0.2", 00:26:35.955 "trsvcid": "4420" 00:26:35.955 }, 00:26:35.955 "secure_channel": false, 00:26:35.955 "sock_impl": "ssl" 00:26:35.955 } 00:26:35.955 } 00:26:35.955 ] 00:26:35.955 } 00:26:35.955 ] 00:26:35.955 }' 00:26:35.955 08:22:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:35.955 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:35.955 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:35.955 
08:22:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4164560 00:26:35.955 08:22:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:26:35.955 08:22:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4164560 00:26:35.955 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4164560 ']' 00:26:35.955 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.955 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:35.955 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.955 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.955 08:22:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:36.213 [2024-07-21 08:22:45.593209] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:26:36.213 [2024-07-21 08:22:45.593295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.213 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.213 [2024-07-21 08:22:45.654860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.213 [2024-07-21 08:22:45.743472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.213 [2024-07-21 08:22:45.743523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:36.213 [2024-07-21 08:22:45.743552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.213 [2024-07-21 08:22:45.743563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.213 [2024-07-21 08:22:45.743573] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:36.213 [2024-07-21 08:22:45.743669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.471 [2024-07-21 08:22:45.980932] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.471 [2024-07-21 08:22:46.019347] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:36.472 [2024-07-21 08:22:46.019587] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=4164708 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 4164708 /var/tmp/bdevperf.sock 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4164708 ']' 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:37.038 08:22:46 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:26:37.038 "subsystems": [ 00:26:37.038 { 00:26:37.038 "subsystem": "keyring", 00:26:37.038 "config": [ 00:26:37.038 { 00:26:37.038 "method": "keyring_file_add_key", 00:26:37.038 "params": { 00:26:37.038 "name": "key0", 00:26:37.038 "path": "/tmp/tmp.JPBYj9wOX0" 00:26:37.038 } 00:26:37.038 } 00:26:37.038 ] 00:26:37.038 }, 00:26:37.038 { 00:26:37.038 "subsystem": "iobuf", 00:26:37.038 "config": [ 00:26:37.038 { 00:26:37.038 "method": "iobuf_set_options", 00:26:37.038 "params": { 00:26:37.038 "small_pool_count": 8192, 00:26:37.038 "large_pool_count": 1024, 00:26:37.038 "small_bufsize": 8192, 00:26:37.038 "large_bufsize": 135168 00:26:37.038 } 00:26:37.038 } 00:26:37.038 ] 00:26:37.038 }, 00:26:37.038 { 00:26:37.038 "subsystem": "sock", 00:26:37.038 "config": [ 00:26:37.038 { 00:26:37.038 "method": "sock_set_default_impl", 00:26:37.038 "params": { 00:26:37.038 "impl_name": "posix" 00:26:37.038 } 00:26:37.038 }, 00:26:37.038 { 00:26:37.038 "method": "sock_impl_set_options", 00:26:37.038 "params": { 00:26:37.038 "impl_name": "ssl", 00:26:37.038 "recv_buf_size": 4096, 00:26:37.038 "send_buf_size": 4096, 00:26:37.038 "enable_recv_pipe": true, 00:26:37.038 "enable_quickack": false, 00:26:37.038 "enable_placement_id": 0, 00:26:37.038 "enable_zerocopy_send_server": true, 00:26:37.038 "enable_zerocopy_send_client": false, 00:26:37.038 "zerocopy_threshold": 0, 00:26:37.038 "tls_version": 0, 00:26:37.038 "enable_ktls": false 00:26:37.038 } 00:26:37.038 }, 00:26:37.038 { 00:26:37.038 "method": "sock_impl_set_options", 00:26:37.038 "params": { 00:26:37.038 "impl_name": "posix", 00:26:37.038 "recv_buf_size": 2097152, 00:26:37.038 "send_buf_size": 2097152, 
00:26:37.038 "enable_recv_pipe": true, 00:26:37.038 "enable_quickack": false, 00:26:37.038 "enable_placement_id": 0, 00:26:37.038 "enable_zerocopy_send_server": true, 00:26:37.038 "enable_zerocopy_send_client": false, 00:26:37.038 "zerocopy_threshold": 0, 00:26:37.038 "tls_version": 0, 00:26:37.038 "enable_ktls": false 00:26:37.038 } 00:26:37.038 } 00:26:37.038 ] 00:26:37.038 }, 00:26:37.038 { 00:26:37.038 "subsystem": "vmd", 00:26:37.038 "config": [] 00:26:37.038 }, 00:26:37.038 { 00:26:37.038 "subsystem": "accel", 00:26:37.038 "config": [ 00:26:37.038 { 00:26:37.038 "method": "accel_set_options", 00:26:37.038 "params": { 00:26:37.038 "small_cache_size": 128, 00:26:37.038 "large_cache_size": 16, 00:26:37.038 "task_count": 2048, 00:26:37.038 "sequence_count": 2048, 00:26:37.038 "buf_count": 2048 00:26:37.038 } 00:26:37.038 } 00:26:37.038 ] 00:26:37.038 }, 00:26:37.038 { 00:26:37.038 "subsystem": "bdev", 00:26:37.038 "config": [ 00:26:37.038 { 00:26:37.038 "method": "bdev_set_options", 00:26:37.038 "params": { 00:26:37.038 "bdev_io_pool_size": 65535, 00:26:37.038 "bdev_io_cache_size": 256, 00:26:37.038 "bdev_auto_examine": true, 00:26:37.038 "iobuf_small_cache_size": 128, 00:26:37.038 "iobuf_large_cache_size": 16 00:26:37.038 } 00:26:37.038 }, 00:26:37.038 { 00:26:37.038 "method": "bdev_raid_set_options", 00:26:37.038 "params": { 00:26:37.038 "process_window_size_kb": 1024, 00:26:37.038 "process_max_bandwidth_mb_sec": 0 00:26:37.038 } 00:26:37.038 }, 00:26:37.038 { 00:26:37.038 "method": "bdev_iscsi_set_options", 00:26:37.038 "params": { 00:26:37.038 "timeout_sec": 30 00:26:37.038 } 00:26:37.038 }, 00:26:37.038 { 00:26:37.038 "method": "bdev_nvme_set_options", 00:26:37.038 "params": { 00:26:37.038 "action_on_timeout": "none", 00:26:37.038 "timeout_us": 0, 00:26:37.038 "timeout_admin_us": 0, 00:26:37.038 "keep_alive_timeout_ms": 10000, 00:26:37.038 "arbitration_burst": 0, 00:26:37.038 "low_priority_weight": 0, 00:26:37.038 "medium_priority_weight": 0, 00:26:37.038 
"high_priority_weight": 0, 00:26:37.038 "nvme_adminq_poll_period_us": 10000, 00:26:37.038 "nvme_ioq_poll_period_us": 0, 00:26:37.038 "io_queue_requests": 512, 00:26:37.038 "delay_cmd_submit": true, 00:26:37.038 "transport_retry_count": 4, 00:26:37.038 "bdev_retry_count": 3, 00:26:37.038 "transport_ack_timeout": 0, 00:26:37.038 "ctrlr_loss_timeout_sec": 0, 00:26:37.038 "reconnect_delay_sec": 0, 00:26:37.038 "fast_io_fail_timeout_sec": 0, 00:26:37.038 "disable_auto_failback": false, 00:26:37.038 "generate_uuids": false, 00:26:37.038 "transport_tos": 0, 00:26:37.038 "nvme_error_stat": false, 00:26:37.038 "rdma_srq_size": 0, 00:26:37.038 "io_path_stat": false, 00:26:37.038 "allow_accel_sequence": false, 00:26:37.038 "rdma_max_cq_size": 0, 00:26:37.038 "rdma_cm_event_timeout_ms": 0, 00:26:37.038 "dhchap_digests": [ 00:26:37.038 "sha256", 00:26:37.038 "sha384", 00:26:37.038 "sha512" 00:26:37.038 ], 00:26:37.038 "dhchap_dhgroups": [ 00:26:37.038 "null", 00:26:37.038 "ffdhe2048", 00:26:37.038 "ffdhe3072", 00:26:37.038 "ffdhe4096", 00:26:37.038 "ffdhe6144", 00:26:37.038 "ffdhe8192" 00:26:37.038 ] 00:26:37.038 } 00:26:37.038 }, 00:26:37.038 { 00:26:37.038 "method": "bdev_nvme_attach_controller", 00:26:37.038 "params": { 00:26:37.038 "name": "nvme0", 00:26:37.038 "trtype": "TCP", 00:26:37.038 "adrfam": "IPv4", 00:26:37.038 "traddr": "10.0.0.2", 00:26:37.038 "trsvcid": "4420", 00:26:37.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:37.039 "prchk_reftag": false, 00:26:37.039 "prchk_guard": false, 00:26:37.039 "ctrlr_loss_timeout_sec": 0, 00:26:37.039 "reconnect_delay_sec": 0, 00:26:37.039 "fast_io_fail_timeout_sec": 0, 00:26:37.039 "psk": "key0", 00:26:37.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:37.039 "hdgst": false, 00:26:37.039 "ddgst": false 00:26:37.039 } 00:26:37.039 }, 00:26:37.039 { 00:26:37.039 "method": "bdev_nvme_set_hotplug", 00:26:37.039 "params": { 00:26:37.039 "period_us": 100000, 00:26:37.039 "enable": false 00:26:37.039 } 00:26:37.039 }, 
00:26:37.039 { 00:26:37.039 "method": "bdev_enable_histogram", 00:26:37.039 "params": { 00:26:37.039 "name": "nvme0n1", 00:26:37.039 "enable": true 00:26:37.039 } 00:26:37.039 }, 00:26:37.039 { 00:26:37.039 "method": "bdev_wait_for_examine" 00:26:37.039 } 00:26:37.039 ] 00:26:37.039 }, 00:26:37.039 { 00:26:37.039 "subsystem": "nbd", 00:26:37.039 "config": [] 00:26:37.039 } 00:26:37.039 ] 00:26:37.039 }' 00:26:37.039 08:22:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:37.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:37.039 08:22:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:37.039 08:22:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:37.296 [2024-07-21 08:22:46.696947] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:26:37.296 [2024-07-21 08:22:46.697020] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4164708 ] 00:26:37.296 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.296 [2024-07-21 08:22:46.758994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.296 [2024-07-21 08:22:46.849639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.554 [2024-07-21 08:22:47.026513] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:38.118 08:22:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:38.118 08:22:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:26:38.118 08:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:38.118 08:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:26:38.375 08:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.375 08:22:47 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:38.633 Running I/O for 1 seconds... 00:26:39.563 00:26:39.563 Latency(us) 00:26:39.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.563 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:39.563 Verification LBA range: start 0x0 length 0x2000 00:26:39.563 nvme0n1 : 1.02 3310.80 12.93 0.00 0.00 38237.95 6796.33 33593.27 00:26:39.563 =================================================================================================================== 00:26:39.563 Total : 3310.80 12.93 0.00 0.00 38237.95 6796.33 33593.27 00:26:39.563 0 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@818 -- # for n in $shm_files 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:39.563 nvmf_trace.0 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 4164708 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4164708 ']' 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4164708 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4164708 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4164708' 00:26:39.563 killing process with pid 4164708 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4164708 00:26:39.563 Received shutdown signal, test time was about 1.000000 seconds 00:26:39.563 00:26:39.563 Latency(us) 00:26:39.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.563 =================================================================================================================== 00:26:39.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:39.563 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4164708 00:26:39.820 08:22:49 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:26:39.820 08:22:49 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:26:39.820 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:26:39.820 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:39.820 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:26:39.820 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:39.820 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:39.820 rmmod nvme_tcp 00:26:39.820 rmmod nvme_fabrics 00:26:39.820 rmmod nvme_keyring 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4164560 ']' 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4164560 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4164560 ']' 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4164560 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4164560 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4164560' 00:26:40.077 killing process with pid 4164560 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4164560 00:26:40.077 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4164560 
00:26:40.335 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:40.335 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:40.335 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:40.335 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:40.335 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:40.335 08:22:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.335 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.335 08:22:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.265 08:22:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:42.265 08:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.eknzdKUwyC /tmp/tmp.eqa9wJmXbg /tmp/tmp.JPBYj9wOX0 00:26:42.265 00:26:42.265 real 1m18.692s 00:26:42.265 user 2m2.175s 00:26:42.265 sys 0m27.115s 00:26:42.265 08:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:42.265 08:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:42.265 ************************************ 00:26:42.265 END TEST nvmf_tls 00:26:42.265 ************************************ 00:26:42.265 08:22:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:42.265 08:22:51 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:42.265 08:22:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:42.265 08:22:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:42.265 08:22:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:42.265 ************************************ 00:26:42.265 START TEST nvmf_fips 00:26:42.265 
************************************ 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:42.265 * Looking for test storage... 00:26:42.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.265 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.522 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:42.522 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:42.522 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.522 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.522 08:22:51 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.522 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.522 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.522 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.522 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.522 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:26:42.523 08:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:26:42.523 Error setting digest 00:26:42.523 00F2DDF3817F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:26:42.523 00F2DDF3817F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:26:42.523 08:22:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:44.422 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:44.422 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:44.422 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:44.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.422 08:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.422 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.422 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.422 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:44.422 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.681 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.681 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.681 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:44.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:26:44.681 00:26:44.681 --- 10.0.0.2 ping statistics --- 00:26:44.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.681 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:26:44.681 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:26:44.681 00:26:44.681 --- 10.0.0.1 ping statistics --- 00:26:44.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.681 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4167073 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4167073 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 4167073 ']' 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:44.682 08:22:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:44.682 [2024-07-21 08:22:54.192195] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:26:44.682 [2024-07-21 08:22:54.192298] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.682 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.682 [2024-07-21 08:22:54.260575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.939 [2024-07-21 08:22:54.353776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.939 [2024-07-21 08:22:54.353843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.939 [2024-07-21 08:22:54.353869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.939 [2024-07-21 08:22:54.353883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.939 [2024-07-21 08:22:54.353894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:44.939 [2024-07-21 08:22:54.353924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:45.870 [2024-07-21 08:22:55.399997] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.870 [2024-07-21 08:22:55.415998] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:26:45.870 [2024-07-21 08:22:55.416191] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.870 [2024-07-21 08:22:55.448658] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:45.870 malloc0 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4167230 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4167230 /var/tmp/bdevperf.sock 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 4167230 ']' 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:45.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.870 08:22:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:46.127 [2024-07-21 08:22:55.542060] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:26:46.127 [2024-07-21 08:22:55.542163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167230 ] 00:26:46.127 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.127 [2024-07-21 08:22:55.604465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.127 [2024-07-21 08:22:55.697797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:47.057 08:22:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.057 08:22:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:26:47.057 08:22:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:47.314 [2024-07-21 08:22:56.715498] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:47.314 [2024-07-21 08:22:56.715650] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:47.314 TLSTESTn1 00:26:47.314 08:22:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:47.314 Running I/O for 10 seconds... 
00:26:59.495 00:26:59.495 Latency(us) 00:26:59.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.495 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:59.495 Verification LBA range: start 0x0 length 0x2000 00:26:59.495 TLSTESTn1 : 10.02 3506.67 13.70 0.00 0.00 36440.32 7621.59 39612.87 00:26:59.495 =================================================================================================================== 00:26:59.495 Total : 3506.67 13.70 0.00 0.00 36440.32 7621.59 39612.87 00:26:59.495 0 00:26:59.495 08:23:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:26:59.495 08:23:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:59.495 08:23:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:26:59.495 08:23:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:26:59.495 08:23:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:26:59.495 08:23:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:59.495 08:23:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:26:59.495 08:23:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:26:59.495 08:23:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:26:59.495 08:23:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:59.495 nvmf_trace.0 00:26:59.495 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:26:59.495 08:23:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4167230 00:26:59.495 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 4167230 ']' 00:26:59.495 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill 
-0 4167230 00:26:59.495 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:26:59.495 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.495 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4167230 00:26:59.495 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4167230' 00:26:59.496 killing process with pid 4167230 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 4167230 00:26:59.496 Received shutdown signal, test time was about 10.000000 seconds 00:26:59.496 00:26:59.496 Latency(us) 00:26:59.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.496 =================================================================================================================== 00:26:59.496 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:59.496 [2024-07-21 08:23:07.054091] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 4167230 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:26:59.496 rmmod nvme_tcp 00:26:59.496 rmmod nvme_fabrics 00:26:59.496 rmmod nvme_keyring 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4167073 ']' 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4167073 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 4167073 ']' 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 4167073 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4167073 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4167073' 00:26:59.496 killing process with pid 4167073 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 4167073 00:26:59.496 [2024-07-21 08:23:07.327785] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 4167073 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.496 08:23:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.061 08:23:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:00.061 08:23:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:27:00.061 00:27:00.061 real 0m17.764s 00:27:00.061 user 0m23.895s 00:27:00.061 sys 0m5.205s 00:27:00.061 08:23:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:00.061 08:23:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:00.061 ************************************ 00:27:00.061 END TEST nvmf_fips 00:27:00.061 ************************************ 00:27:00.061 08:23:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:00.061 08:23:09 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:27:00.061 08:23:09 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:00.061 08:23:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:00.061 08:23:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:00.061 08:23:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:00.061 ************************************ 00:27:00.061 START TEST nvmf_fuzz 00:27:00.061 ************************************ 00:27:00.061 08:23:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:00.320 * Looking for test storage... 00:27:00.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:27:00.320 08:23:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:02.216 
08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:02.216 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:02.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # 
for pci in "${pci_devs[@]}" 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:02.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:02.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # 
is_hw=yes 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:02.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:27:02.216 00:27:02.216 --- 10.0.0.2 ping statistics --- 00:27:02.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.216 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:27:02.216 00:27:02.216 --- 10.0.0.1 ping statistics --- 00:27:02.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.216 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- 
target/fabrics_fuzz.sh@14 -- # nvmfpid=4170496 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4170496 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 4170496 ']' 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:02.216 08:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:02.474 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:02.474 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:27:02.474 08:23:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:02.474 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.474 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:02.474 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.474 08:23:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:27:02.474 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.474 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:02.730 Malloc0 00:27:02.730 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.730 08:23:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:02.730 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.730 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:02.730 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.730 08:23:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:02.730 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.731 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:02.731 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:27:02.731 08:23:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.731 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.731 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:02.731 08:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.731 08:23:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:27:02.731 08:23:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:27:34.856 Fuzzing completed. Shutting down the fuzz application 00:27:34.856 00:27:34.856 Dumping successful admin opcodes: 00:27:34.856 8, 9, 10, 24, 00:27:34.856 Dumping successful io opcodes: 00:27:34.856 0, 9, 00:27:34.856 NS: 0x200003aeff00 I/O qp, Total commands completed: 434805, total successful commands: 2540, random_seed: 2072558976 00:27:34.856 NS: 0x200003aeff00 admin qp, Total commands completed: 53664, total successful commands: 432, random_seed: 1236999744 00:27:34.856 08:23:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:27:34.856 Fuzzing completed. 
Shutting down the fuzz application 00:27:34.856 00:27:34.856 Dumping successful admin opcodes: 00:27:34.856 24, 00:27:34.856 Dumping successful io opcodes: 00:27:34.856 00:27:34.856 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1779343176 00:27:34.856 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1779462678 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:34.856 rmmod nvme_tcp 00:27:34.856 rmmod nvme_fabrics 00:27:34.856 rmmod nvme_keyring 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 4170496 ']' 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 4170496 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 4170496 ']' 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 4170496 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:34.856 08:23:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4170496 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4170496' 00:27:34.856 killing process with pid 4170496 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 4170496 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 4170496 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.856 08:23:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.759 08:23:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:36.759 08:23:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 
-- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:27:36.759 00:27:36.759 real 0m36.685s 00:27:36.759 user 0m50.974s 00:27:36.759 sys 0m14.985s 00:27:36.759 08:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.759 08:23:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:36.759 ************************************ 00:27:36.759 END TEST nvmf_fuzz 00:27:36.759 ************************************ 00:27:36.759 08:23:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:36.759 08:23:46 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:36.759 08:23:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:36.759 08:23:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.759 08:23:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.759 ************************************ 00:27:36.759 START TEST nvmf_multiconnection 00:27:36.759 ************************************ 00:27:36.759 08:23:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:37.016 * Looking for test storage... 
00:27:37.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.016 
08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:27:37.016 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:27:37.017 08:23:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:38.914 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@291 -- # pci_devs=() 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.915 
08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:38.915 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.915 
08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:38.915 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:38.915 Found net devices under 
0000:0a:00.0: cvl_0_0 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:38.915 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.915 
08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:38.915 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:27:38.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:27:38.915 00:27:38.915 --- 10.0.0.2 ping statistics --- 00:27:38.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.915 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:38.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:27:38.915 00:27:38.915 --- 10.0.0.1 ping statistics --- 00:27:38.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.915 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:38.915 
08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=4176145 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 4176145 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 4176145 ']' 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:38.915 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.916 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:38.916 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.172 [2024-07-21 08:23:48.545590] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:27:39.172 [2024-07-21 08:23:48.545704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.172 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.172 [2024-07-21 08:23:48.615792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:39.172 [2024-07-21 08:23:48.712333] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:39.172 [2024-07-21 08:23:48.712390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.172 [2024-07-21 08:23:48.712418] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.172 [2024-07-21 08:23:48.712432] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.172 [2024-07-21 08:23:48.712443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:39.172 [2024-07-21 08:23:48.712538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.172 [2024-07-21 08:23:48.712603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.172 [2024-07-21 08:23:48.712663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.172 [2024-07-21 08:23:48.712667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 
[2024-07-21 08:23:48.880648] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 Malloc1 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.429 08:23:48 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 [2024-07-21 08:23:48.937987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 Malloc2 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 Malloc3 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.429 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.686 Malloc4 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.686 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 Malloc5 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 Malloc6 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 Malloc7 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 Malloc8 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.687 Malloc9 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.687 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.944 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.945 Malloc10 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.945 Malloc11 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.945 08:23:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:40.508 08:23:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:40.508 08:23:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:40.508 08:23:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:40.508 08:23:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:40.508 08:23:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:43.030 08:23:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:43.030 08:23:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:43.030 08:23:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:27:43.031 08:23:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:43.031 08:23:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- 
# (( nvme_devices == nvme_device_counter )) 00:27:43.031 08:23:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:43.031 08:23:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:43.031 08:23:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:43.031 08:23:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:43.031 08:23:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:43.031 08:23:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:43.031 08:23:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:43.031 08:23:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:45.551 08:23:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:45.551 08:23:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:45.551 08:23:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:27:45.551 08:23:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:45.551 08:23:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:45.551 08:23:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:45.551 08:23:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:45.551 08:23:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:45.808 08:23:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:45.808 08:23:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:45.808 08:23:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:45.808 08:23:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:45.808 08:23:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:48.334 08:23:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:48.334 08:23:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:48.334 08:23:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:27:48.334 08:23:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:48.334 08:23:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:48.334 08:23:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:48.334 08:23:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.334 08:23:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:48.591 08:23:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:48.592 08:23:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:27:48.592 08:23:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:48.592 08:23:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:48.592 08:23:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:50.525 08:24:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:50.525 08:24:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:50.525 08:24:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:27:50.525 08:24:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:50.525 08:24:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:50.525 08:24:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:50.525 08:24:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:50.525 08:24:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:51.457 08:24:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:51.457 08:24:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:51.457 08:24:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:51.457 08:24:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:51.457 08:24:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:53.391 08:24:02 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:53.391 08:24:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:53.391 08:24:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:27:53.391 08:24:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:53.391 08:24:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:53.391 08:24:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:53.391 08:24:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:53.391 08:24:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:54.319 08:24:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:54.319 08:24:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:54.319 08:24:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:54.319 08:24:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:54.319 08:24:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:56.210 08:24:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:56.210 08:24:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:56.210 08:24:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:27:56.210 08:24:05 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:56.210 08:24:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:56.210 08:24:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:56.210 08:24:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:56.210 08:24:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:57.142 08:24:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:57.142 08:24:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:57.142 08:24:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:57.142 08:24:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:57.142 08:24:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:59.039 08:24:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:59.039 08:24:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:59.039 08:24:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:27:59.039 08:24:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:59.039 08:24:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:59.039 08:24:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:59.039 08:24:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:27:59.039 08:24:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:59.971 08:24:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:59.971 08:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:59.971 08:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:59.971 08:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:59.971 08:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:01.859 08:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:01.859 08:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:01.859 08:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:28:01.859 08:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:01.859 08:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:01.859 08:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:01.859 08:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:01.859 08:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:28:02.787 08:24:12 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK9 00:28:02.787 08:24:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:02.787 08:24:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:02.787 08:24:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:02.787 08:24:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:04.719 08:24:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:04.719 08:24:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:04.719 08:24:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:28:04.719 08:24:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:04.719 08:24:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:04.719 08:24:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:04.719 08:24:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.719 08:24:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:28:05.654 08:24:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:28:05.654 08:24:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:05.654 08:24:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:05.654 08:24:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # 
[[ -n '' ]] 00:28:05.654 08:24:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:07.550 08:24:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:07.550 08:24:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:07.550 08:24:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:28:07.550 08:24:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:07.550 08:24:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:07.550 08:24:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:07.550 08:24:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:07.550 08:24:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:28:08.479 08:24:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:28:08.479 08:24:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:08.479 08:24:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:08.479 08:24:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:08.479 08:24:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:11.001 08:24:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:11.001 08:24:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:11.001 08:24:20 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:28:11.001 08:24:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:11.001 08:24:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:11.001 08:24:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:11.001 08:24:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:28:11.001 [global] 00:28:11.001 thread=1 00:28:11.001 invalidate=1 00:28:11.001 rw=read 00:28:11.001 time_based=1 00:28:11.001 runtime=10 00:28:11.001 ioengine=libaio 00:28:11.001 direct=1 00:28:11.001 bs=262144 00:28:11.001 iodepth=64 00:28:11.001 norandommap=1 00:28:11.001 numjobs=1 00:28:11.001 00:28:11.001 [job0] 00:28:11.001 filename=/dev/nvme0n1 00:28:11.001 [job1] 00:28:11.001 filename=/dev/nvme10n1 00:28:11.001 [job2] 00:28:11.001 filename=/dev/nvme1n1 00:28:11.001 [job3] 00:28:11.001 filename=/dev/nvme2n1 00:28:11.001 [job4] 00:28:11.001 filename=/dev/nvme3n1 00:28:11.001 [job5] 00:28:11.001 filename=/dev/nvme4n1 00:28:11.001 [job6] 00:28:11.001 filename=/dev/nvme5n1 00:28:11.001 [job7] 00:28:11.001 filename=/dev/nvme6n1 00:28:11.001 [job8] 00:28:11.001 filename=/dev/nvme7n1 00:28:11.001 [job9] 00:28:11.001 filename=/dev/nvme8n1 00:28:11.001 [job10] 00:28:11.001 filename=/dev/nvme9n1 00:28:11.001 Could not set queue depth (nvme0n1) 00:28:11.001 Could not set queue depth (nvme10n1) 00:28:11.001 Could not set queue depth (nvme1n1) 00:28:11.001 Could not set queue depth (nvme2n1) 00:28:11.001 Could not set queue depth (nvme3n1) 00:28:11.001 Could not set queue depth (nvme4n1) 00:28:11.001 Could not set queue depth (nvme5n1) 00:28:11.001 Could not set queue depth (nvme6n1) 00:28:11.001 Could not set queue depth (nvme7n1) 00:28:11.001 Could not set 
queue depth (nvme8n1) 00:28:11.001 Could not set queue depth (nvme9n1) 00:28:11.001 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.001 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.001 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.001 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.001 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.001 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.001 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.001 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.001 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.001 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.001 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:11.001 fio-3.35 00:28:11.001 Starting 11 threads 00:28:23.211 00:28:23.211 job0: (groupid=0, jobs=1): err= 0: pid=4181103: Sun Jul 21 08:24:30 2024 00:28:23.211 read: IOPS=636, BW=159MiB/s (167MB/s)(1609MiB/10114msec) 00:28:23.211 slat (usec): min=9, max=85294, avg=995.19, stdev=4323.44 00:28:23.211 clat (usec): min=1775, max=283319, avg=99509.94, stdev=49863.04 00:28:23.211 lat (usec): min=1849, max=283335, avg=100505.14, stdev=50613.28 00:28:23.211 clat percentiles (msec): 00:28:23.211 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 29], 20.00th=[ 43], 00:28:23.211 | 
30.00th=[ 71], 40.00th=[ 93], 50.00th=[ 109], 60.00th=[ 121], 00:28:23.211 | 70.00th=[ 132], 80.00th=[ 142], 90.00th=[ 163], 95.00th=[ 176], 00:28:23.211 | 99.00th=[ 192], 99.50th=[ 197], 99.90th=[ 211], 99.95th=[ 271], 00:28:23.211 | 99.99th=[ 284] 00:28:23.211 bw ( KiB/s): min=91648, max=370176, per=8.46%, avg=163119.45, stdev=72843.18, samples=20 00:28:23.211 iops : min= 358, max= 1446, avg=637.10, stdev=284.59, samples=20 00:28:23.211 lat (msec) : 2=0.08%, 4=0.53%, 10=1.35%, 20=4.29%, 50=17.54% 00:28:23.211 lat (msec) : 100=20.09%, 250=56.04%, 500=0.08% 00:28:23.211 cpu : usr=0.25%, sys=1.65%, ctx=1433, majf=0, minf=4097 00:28:23.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:28:23.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.211 issued rwts: total=6435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.211 job1: (groupid=0, jobs=1): err= 0: pid=4181105: Sun Jul 21 08:24:30 2024 00:28:23.211 read: IOPS=587, BW=147MiB/s (154MB/s)(1485MiB/10114msec) 00:28:23.211 slat (usec): min=9, max=50928, avg=1176.38, stdev=4191.65 00:28:23.211 clat (usec): min=1128, max=273766, avg=107715.64, stdev=42881.82 00:28:23.211 lat (usec): min=1150, max=273791, avg=108892.02, stdev=43443.49 00:28:23.211 clat percentiles (msec): 00:28:23.211 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 48], 20.00th=[ 78], 00:28:23.211 | 30.00th=[ 94], 40.00th=[ 106], 50.00th=[ 115], 60.00th=[ 125], 00:28:23.211 | 70.00th=[ 132], 80.00th=[ 138], 90.00th=[ 155], 95.00th=[ 167], 00:28:23.211 | 99.00th=[ 205], 99.50th=[ 222], 99.90th=[ 275], 99.95th=[ 275], 00:28:23.211 | 99.99th=[ 275] 00:28:23.211 bw ( KiB/s): min=111616, max=222720, per=7.80%, avg=150411.85, stdev=32062.75, samples=20 00:28:23.211 iops : min= 436, max= 870, avg=587.50, stdev=125.26, samples=20 00:28:23.211 lat (msec) : 
2=0.51%, 4=1.16%, 10=2.71%, 20=2.39%, 50=3.79% 00:28:23.211 lat (msec) : 100=25.93%, 250=63.33%, 500=0.19% 00:28:23.211 cpu : usr=0.40%, sys=1.52%, ctx=1338, majf=0, minf=4097 00:28:23.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:28:23.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.211 issued rwts: total=5940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.211 job2: (groupid=0, jobs=1): err= 0: pid=4181108: Sun Jul 21 08:24:30 2024 00:28:23.211 read: IOPS=525, BW=131MiB/s (138MB/s)(1324MiB/10077msec) 00:28:23.211 slat (usec): min=14, max=72321, avg=1766.37, stdev=5035.95 00:28:23.211 clat (msec): min=12, max=225, avg=119.90, stdev=28.02 00:28:23.211 lat (msec): min=12, max=228, avg=121.66, stdev=28.52 00:28:23.211 clat percentiles (msec): 00:28:23.211 | 1.00th=[ 38], 5.00th=[ 83], 10.00th=[ 91], 20.00th=[ 101], 00:28:23.211 | 30.00th=[ 107], 40.00th=[ 111], 50.00th=[ 117], 60.00th=[ 125], 00:28:23.211 | 70.00th=[ 131], 80.00th=[ 140], 90.00th=[ 157], 95.00th=[ 167], 00:28:23.211 | 99.00th=[ 205], 99.50th=[ 213], 99.90th=[ 220], 99.95th=[ 222], 00:28:23.211 | 99.99th=[ 226] 00:28:23.211 bw ( KiB/s): min=84992, max=194048, per=6.95%, avg=133976.45, stdev=25170.13, samples=20 00:28:23.211 iops : min= 332, max= 758, avg=523.30, stdev=98.31, samples=20 00:28:23.211 lat (msec) : 20=0.49%, 50=0.91%, 100=18.26%, 250=80.35% 00:28:23.211 cpu : usr=0.29%, sys=1.78%, ctx=1038, majf=0, minf=3725 00:28:23.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:23.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.211 issued rwts: total=5297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.211 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:28:23.211 job3: (groupid=0, jobs=1): err= 0: pid=4181109: Sun Jul 21 08:24:30 2024 00:28:23.211 read: IOPS=640, BW=160MiB/s (168MB/s)(1605MiB/10022msec) 00:28:23.211 slat (usec): min=9, max=111366, avg=1040.69, stdev=4530.66 00:28:23.211 clat (msec): min=2, max=241, avg=98.80, stdev=52.20 00:28:23.211 lat (msec): min=2, max=302, avg=99.84, stdev=52.82 00:28:23.211 clat percentiles (msec): 00:28:23.211 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 29], 20.00th=[ 39], 00:28:23.211 | 30.00th=[ 61], 40.00th=[ 89], 50.00th=[ 110], 60.00th=[ 122], 00:28:23.211 | 70.00th=[ 132], 80.00th=[ 144], 90.00th=[ 165], 95.00th=[ 178], 00:28:23.211 | 99.00th=[ 209], 99.50th=[ 215], 99.90th=[ 232], 99.95th=[ 236], 00:28:23.211 | 99.99th=[ 241] 00:28:23.211 bw ( KiB/s): min=92160, max=390144, per=8.44%, avg=162751.20, stdev=82809.09, samples=20 00:28:23.211 iops : min= 360, max= 1524, avg=635.70, stdev=323.49, samples=20 00:28:23.211 lat (msec) : 4=0.28%, 10=1.88%, 20=3.91%, 50=18.44%, 100=20.79% 00:28:23.211 lat (msec) : 250=54.70% 00:28:23.211 cpu : usr=0.30%, sys=1.76%, ctx=1415, majf=0, minf=4097 00:28:23.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:28:23.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.211 issued rwts: total=6421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.211 job4: (groupid=0, jobs=1): err= 0: pid=4181110: Sun Jul 21 08:24:30 2024 00:28:23.211 read: IOPS=570, BW=143MiB/s (150MB/s)(1444MiB/10117msec) 00:28:23.211 slat (usec): min=10, max=125941, avg=1641.19, stdev=5411.59 00:28:23.211 clat (usec): min=1396, max=309314, avg=110363.68, stdev=48193.24 00:28:23.211 lat (usec): min=1414, max=309333, avg=112004.86, stdev=49039.90 00:28:23.211 clat percentiles (msec): 00:28:23.211 | 1.00th=[ 5], 5.00th=[ 
32], 10.00th=[ 43], 20.00th=[ 66], 00:28:23.211 | 30.00th=[ 82], 40.00th=[ 105], 50.00th=[ 116], 60.00th=[ 126], 00:28:23.211 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 169], 95.00th=[ 184], 00:28:23.211 | 99.00th=[ 218], 99.50th=[ 234], 99.90th=[ 284], 99.95th=[ 284], 00:28:23.211 | 99.99th=[ 309] 00:28:23.211 bw ( KiB/s): min=89088, max=264192, per=7.58%, avg=146188.45, stdev=55875.39, samples=20 00:28:23.211 iops : min= 348, max= 1032, avg=571.00, stdev=218.28, samples=20 00:28:23.211 lat (msec) : 2=0.09%, 4=0.74%, 10=1.35%, 20=1.49%, 50=7.67% 00:28:23.211 lat (msec) : 100=26.37%, 250=61.94%, 500=0.35% 00:28:23.211 cpu : usr=0.34%, sys=1.98%, ctx=1200, majf=0, minf=4097 00:28:23.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:23.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.211 issued rwts: total=5775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.211 job5: (groupid=0, jobs=1): err= 0: pid=4181112: Sun Jul 21 08:24:30 2024 00:28:23.211 read: IOPS=615, BW=154MiB/s (161MB/s)(1558MiB/10117msec) 00:28:23.211 slat (usec): min=14, max=82559, avg=1489.77, stdev=4846.51 00:28:23.211 clat (usec): min=1303, max=301797, avg=102366.25, stdev=38136.41 00:28:23.211 lat (usec): min=1355, max=301823, avg=103856.02, stdev=38777.62 00:28:23.211 clat percentiles (msec): 00:28:23.211 | 1.00th=[ 14], 5.00th=[ 39], 10.00th=[ 54], 20.00th=[ 66], 00:28:23.211 | 30.00th=[ 83], 40.00th=[ 99], 50.00th=[ 108], 60.00th=[ 115], 00:28:23.211 | 70.00th=[ 123], 80.00th=[ 133], 90.00th=[ 144], 95.00th=[ 159], 00:28:23.211 | 99.00th=[ 203], 99.50th=[ 234], 99.90th=[ 268], 99.95th=[ 268], 00:28:23.211 | 99.99th=[ 300] 00:28:23.211 bw ( KiB/s): min=117248, max=249856, per=8.19%, avg=157836.60, stdev=37264.46, samples=20 00:28:23.211 iops : min= 458, max= 976, avg=616.50, 
stdev=145.60, samples=20 00:28:23.211 lat (msec) : 2=0.11%, 4=0.18%, 10=0.13%, 20=1.14%, 50=6.45% 00:28:23.211 lat (msec) : 100=33.71%, 250=58.06%, 500=0.22% 00:28:23.211 cpu : usr=0.30%, sys=2.13%, ctx=1214, majf=0, minf=4097 00:28:23.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:23.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.211 issued rwts: total=6230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.211 job6: (groupid=0, jobs=1): err= 0: pid=4181113: Sun Jul 21 08:24:30 2024 00:28:23.211 read: IOPS=818, BW=205MiB/s (214MB/s)(2070MiB/10118msec) 00:28:23.211 slat (usec): min=12, max=107092, avg=1177.62, stdev=3780.43 00:28:23.211 clat (msec): min=23, max=283, avg=76.97, stdev=54.39 00:28:23.211 lat (msec): min=23, max=290, avg=78.15, stdev=55.20 00:28:23.211 clat percentiles (msec): 00:28:23.211 | 1.00th=[ 26], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 31], 00:28:23.211 | 30.00th=[ 32], 40.00th=[ 34], 50.00th=[ 46], 60.00th=[ 79], 00:28:23.211 | 70.00th=[ 115], 80.00th=[ 142], 90.00th=[ 159], 95.00th=[ 171], 00:28:23.212 | 99.00th=[ 194], 99.50th=[ 205], 99.90th=[ 264], 99.95th=[ 284], 00:28:23.212 | 99.99th=[ 284] 00:28:23.212 bw ( KiB/s): min=88576, max=524800, per=10.91%, avg=210266.05, stdev=154202.52, samples=20 00:28:23.212 iops : min= 346, max= 2050, avg=821.35, stdev=602.35, samples=20 00:28:23.212 lat (msec) : 50=51.82%, 100=12.97%, 250=35.06%, 500=0.14% 00:28:23.212 cpu : usr=0.43%, sys=2.40%, ctx=1551, majf=0, minf=4097 00:28:23.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:23.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.212 issued rwts: total=8279,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:28:23.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.212 job7: (groupid=0, jobs=1): err= 0: pid=4181114: Sun Jul 21 08:24:30 2024 00:28:23.212 read: IOPS=560, BW=140MiB/s (147MB/s)(1412MiB/10078msec) 00:28:23.212 slat (usec): min=13, max=125449, avg=1481.75, stdev=5151.67 00:28:23.212 clat (usec): min=1629, max=278800, avg=112636.49, stdev=40027.22 00:28:23.212 lat (usec): min=1699, max=278826, avg=114118.25, stdev=40736.96 00:28:23.212 clat percentiles (msec): 00:28:23.212 | 1.00th=[ 15], 5.00th=[ 26], 10.00th=[ 51], 20.00th=[ 91], 00:28:23.212 | 30.00th=[ 104], 40.00th=[ 110], 50.00th=[ 116], 60.00th=[ 124], 00:28:23.212 | 70.00th=[ 132], 80.00th=[ 144], 90.00th=[ 155], 95.00th=[ 167], 00:28:23.212 | 99.00th=[ 207], 99.50th=[ 220], 99.90th=[ 239], 99.95th=[ 247], 00:28:23.212 | 99.99th=[ 279] 00:28:23.212 bw ( KiB/s): min=99328, max=217088, per=7.41%, avg=142962.35, stdev=34560.07, samples=20 00:28:23.212 iops : min= 388, max= 848, avg=558.40, stdev=135.01, samples=20 00:28:23.212 lat (msec) : 2=0.02%, 4=0.02%, 10=0.19%, 20=3.22%, 50=6.36% 00:28:23.212 lat (msec) : 100=17.65%, 250=72.50%, 500=0.04% 00:28:23.212 cpu : usr=0.36%, sys=1.94%, ctx=1208, majf=0, minf=4097 00:28:23.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:23.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.212 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.212 job8: (groupid=0, jobs=1): err= 0: pid=4181115: Sun Jul 21 08:24:30 2024 00:28:23.212 read: IOPS=627, BW=157MiB/s (164MB/s)(1587MiB/10120msec) 00:28:23.212 slat (usec): min=13, max=107420, avg=1274.31, stdev=4813.23 00:28:23.212 clat (msec): min=2, max=292, avg=100.67, stdev=55.76 00:28:23.212 lat (msec): min=2, max=292, avg=101.95, 
stdev=56.50 00:28:23.212 clat percentiles (msec): 00:28:23.212 | 1.00th=[ 6], 5.00th=[ 19], 10.00th=[ 30], 20.00th=[ 36], 00:28:23.212 | 30.00th=[ 54], 40.00th=[ 80], 50.00th=[ 115], 60.00th=[ 127], 00:28:23.212 | 70.00th=[ 138], 80.00th=[ 155], 90.00th=[ 169], 95.00th=[ 180], 00:28:23.212 | 99.00th=[ 199], 99.50th=[ 226], 99.90th=[ 292], 99.95th=[ 292], 00:28:23.212 | 99.99th=[ 292] 00:28:23.212 bw ( KiB/s): min=89600, max=398848, per=8.34%, avg=160857.15, stdev=90672.52, samples=20 00:28:23.212 iops : min= 350, max= 1558, avg=628.30, stdev=354.21, samples=20 00:28:23.212 lat (msec) : 4=0.17%, 10=2.27%, 20=3.36%, 50=22.67%, 100=15.28% 00:28:23.212 lat (msec) : 250=55.83%, 500=0.43% 00:28:23.212 cpu : usr=0.38%, sys=2.00%, ctx=1301, majf=0, minf=4097 00:28:23.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:23.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.212 issued rwts: total=6348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.212 job9: (groupid=0, jobs=1): err= 0: pid=4181116: Sun Jul 21 08:24:30 2024 00:28:23.212 read: IOPS=1097, BW=274MiB/s (288MB/s)(2766MiB/10078msec) 00:28:23.212 slat (usec): min=10, max=98800, avg=813.63, stdev=3140.38 00:28:23.212 clat (usec): min=1617, max=239211, avg=57446.17, stdev=43864.79 00:28:23.212 lat (usec): min=1643, max=301457, avg=58259.80, stdev=44472.35 00:28:23.212 clat percentiles (msec): 00:28:23.212 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 27], 20.00th=[ 30], 00:28:23.212 | 30.00th=[ 31], 40.00th=[ 33], 50.00th=[ 37], 60.00th=[ 44], 00:28:23.212 | 70.00th=[ 59], 80.00th=[ 90], 90.00th=[ 133], 95.00th=[ 155], 00:28:23.212 | 99.00th=[ 197], 99.50th=[ 211], 99.90th=[ 222], 99.95th=[ 230], 00:28:23.212 | 99.99th=[ 236] 00:28:23.212 bw ( KiB/s): min=101888, max=567296, per=14.60%, avg=281588.05, 
stdev=153382.56, samples=20 00:28:23.212 iops : min= 398, max= 2216, avg=1099.90, stdev=599.11, samples=20 00:28:23.212 lat (msec) : 2=0.03%, 4=0.50%, 10=1.69%, 20=3.07%, 50=58.88% 00:28:23.212 lat (msec) : 100=17.58%, 250=18.26% 00:28:23.212 cpu : usr=0.59%, sys=3.25%, ctx=1959, majf=0, minf=4097 00:28:23.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:28:23.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.212 issued rwts: total=11064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.212 job10: (groupid=0, jobs=1): err= 0: pid=4181117: Sun Jul 21 08:24:30 2024 00:28:23.212 read: IOPS=872, BW=218MiB/s (229MB/s)(2196MiB/10074msec) 00:28:23.212 slat (usec): min=9, max=123294, avg=865.15, stdev=3502.36 00:28:23.212 clat (usec): min=1264, max=239850, avg=72476.91, stdev=44695.47 00:28:23.212 lat (usec): min=1295, max=239889, avg=73342.07, stdev=45230.02 00:28:23.212 clat percentiles (msec): 00:28:23.212 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 33], 00:28:23.212 | 30.00th=[ 41], 40.00th=[ 52], 50.00th=[ 61], 60.00th=[ 70], 00:28:23.212 | 70.00th=[ 92], 80.00th=[ 117], 90.00th=[ 140], 95.00th=[ 159], 00:28:23.212 | 99.00th=[ 188], 99.50th=[ 201], 99.90th=[ 228], 99.95th=[ 228], 00:28:23.212 | 99.99th=[ 241] 00:28:23.212 bw ( KiB/s): min=99840, max=479232, per=11.58%, avg=223272.90, stdev=106947.49, samples=20 00:28:23.212 iops : min= 390, max= 1872, avg=872.10, stdev=417.81, samples=20 00:28:23.212 lat (msec) : 2=0.01%, 4=0.64%, 10=1.67%, 20=2.74%, 50=33.68% 00:28:23.212 lat (msec) : 100=34.49%, 250=26.76% 00:28:23.212 cpu : usr=0.37%, sys=2.64%, ctx=1637, majf=0, minf=4097 00:28:23.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:28:23.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:23.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.212 issued rwts: total=8785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.212 00:28:23.212 Run status group 0 (all jobs): 00:28:23.212 READ: bw=1883MiB/s (1974MB/s), 131MiB/s-274MiB/s (138MB/s-288MB/s), io=18.6GiB (20.0GB), run=10022-10120msec 00:28:23.212 00:28:23.212 Disk stats (read/write): 00:28:23.212 nvme0n1: ios=12587/0, merge=0/0, ticks=1231234/0, in_queue=1231234, util=96.96% 00:28:23.212 nvme10n1: ios=11715/0, merge=0/0, ticks=1234982/0, in_queue=1234982, util=97.20% 00:28:23.212 nvme1n1: ios=10357/0, merge=0/0, ticks=1232221/0, in_queue=1232221, util=97.51% 00:28:23.212 nvme2n1: ios=12521/0, merge=0/0, ticks=1242115/0, in_queue=1242115, util=97.67% 00:28:23.212 nvme3n1: ios=11369/0, merge=0/0, ticks=1226948/0, in_queue=1226948, util=97.74% 00:28:23.212 nvme4n1: ios=12285/0, merge=0/0, ticks=1229401/0, in_queue=1229401, util=98.13% 00:28:23.212 nvme5n1: ios=16368/0, merge=0/0, ticks=1227731/0, in_queue=1227731, util=98.30% 00:28:23.212 nvme6n1: ios=11055/0, merge=0/0, ticks=1233940/0, in_queue=1233940, util=98.43% 00:28:23.212 nvme7n1: ios=12498/0, merge=0/0, ticks=1227950/0, in_queue=1227950, util=98.88% 00:28:23.212 nvme8n1: ios=21882/0, merge=0/0, ticks=1233310/0, in_queue=1233310, util=99.07% 00:28:23.212 nvme9n1: ios=17272/0, merge=0/0, ticks=1235975/0, in_queue=1235975, util=99.22% 00:28:23.212 08:24:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:28:23.212 [global] 00:28:23.212 thread=1 00:28:23.212 invalidate=1 00:28:23.212 rw=randwrite 00:28:23.212 time_based=1 00:28:23.212 runtime=10 00:28:23.212 ioengine=libaio 00:28:23.212 direct=1 00:28:23.212 bs=262144 00:28:23.212 iodepth=64 00:28:23.212 norandommap=1 00:28:23.212 numjobs=1 
00:28:23.212 00:28:23.212 [job0] 00:28:23.212 filename=/dev/nvme0n1 00:28:23.212 [job1] 00:28:23.212 filename=/dev/nvme10n1 00:28:23.212 [job2] 00:28:23.212 filename=/dev/nvme1n1 00:28:23.212 [job3] 00:28:23.212 filename=/dev/nvme2n1 00:28:23.212 [job4] 00:28:23.212 filename=/dev/nvme3n1 00:28:23.212 [job5] 00:28:23.212 filename=/dev/nvme4n1 00:28:23.212 [job6] 00:28:23.212 filename=/dev/nvme5n1 00:28:23.212 [job7] 00:28:23.212 filename=/dev/nvme6n1 00:28:23.212 [job8] 00:28:23.212 filename=/dev/nvme7n1 00:28:23.212 [job9] 00:28:23.212 filename=/dev/nvme8n1 00:28:23.212 [job10] 00:28:23.212 filename=/dev/nvme9n1 00:28:23.212 Could not set queue depth (nvme0n1) 00:28:23.212 Could not set queue depth (nvme10n1) 00:28:23.212 Could not set queue depth (nvme1n1) 00:28:23.212 Could not set queue depth (nvme2n1) 00:28:23.212 Could not set queue depth (nvme3n1) 00:28:23.212 Could not set queue depth (nvme4n1) 00:28:23.212 Could not set queue depth (nvme5n1) 00:28:23.212 Could not set queue depth (nvme6n1) 00:28:23.212 Could not set queue depth (nvme7n1) 00:28:23.212 Could not set queue depth (nvme8n1) 00:28:23.212 Could not set queue depth (nvme9n1) 00:28:23.212 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:23.212 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:23.212 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:23.212 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:23.212 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:23.212 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:23.212 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:23.212 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:23.212 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:23.212 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:23.212 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:23.212 fio-3.35 00:28:23.212 Starting 11 threads 00:28:33.218 00:28:33.218 job0: (groupid=0, jobs=1): err= 0: pid=4182137: Sun Jul 21 08:24:41 2024 00:28:33.218 write: IOPS=309, BW=77.4MiB/s (81.1MB/s)(789MiB/10196msec); 0 zone resets 00:28:33.218 slat (usec): min=23, max=53345, avg=2618.14, stdev=6064.33 00:28:33.218 clat (usec): min=1808, max=415886, avg=204027.55, stdev=92664.72 00:28:33.218 lat (msec): min=2, max=415, avg=206.65, stdev=93.96 00:28:33.218 clat percentiles (msec): 00:28:33.218 | 1.00th=[ 8], 5.00th=[ 36], 10.00th=[ 79], 20.00th=[ 125], 00:28:33.218 | 30.00th=[ 159], 40.00th=[ 178], 50.00th=[ 207], 60.00th=[ 224], 00:28:33.218 | 70.00th=[ 255], 80.00th=[ 300], 90.00th=[ 326], 95.00th=[ 351], 00:28:33.218 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 401], 99.95th=[ 418], 00:28:33.218 | 99.99th=[ 418] 00:28:33.218 bw ( KiB/s): min=40960, max=169984, per=6.45%, avg=79180.80, stdev=34602.65, samples=20 00:28:33.218 iops : min= 160, max= 664, avg=309.30, stdev=135.17, samples=20 00:28:33.218 lat (msec) : 2=0.03%, 4=0.13%, 10=1.33%, 20=1.74%, 50=2.47% 00:28:33.218 lat (msec) : 100=8.17%, 250=55.20%, 500=30.93% 00:28:33.218 cpu : usr=1.12%, sys=0.94%, ctx=1352, majf=0, minf=1 00:28:33.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:28:33.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.218 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:33.218 issued rwts: total=0,3156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.218 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:33.218 job1: (groupid=0, jobs=1): err= 0: pid=4182149: Sun Jul 21 08:24:41 2024 00:28:33.218 write: IOPS=447, BW=112MiB/s (117MB/s)(1126MiB/10071msec); 0 zone resets 00:28:33.218 slat (usec): min=19, max=43122, avg=1667.73, stdev=4581.95 00:28:33.218 clat (usec): min=1400, max=371266, avg=141383.38, stdev=96210.29 00:28:33.218 lat (usec): min=1691, max=371418, avg=143051.11, stdev=97563.29 00:28:33.218 clat percentiles (msec): 00:28:33.218 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 22], 20.00th=[ 52], 00:28:33.218 | 30.00th=[ 77], 40.00th=[ 99], 50.00th=[ 131], 60.00th=[ 163], 00:28:33.218 | 70.00th=[ 190], 80.00th=[ 215], 90.00th=[ 300], 95.00th=[ 330], 00:28:33.218 | 99.00th=[ 363], 99.50th=[ 368], 99.90th=[ 372], 99.95th=[ 372], 00:28:33.218 | 99.99th=[ 372] 00:28:33.218 bw ( KiB/s): min=49152, max=283190, per=9.27%, avg=113717.90, stdev=62326.20, samples=20 00:28:33.218 iops : min= 192, max= 1106, avg=444.20, stdev=243.43, samples=20 00:28:33.218 lat (msec) : 2=0.18%, 4=1.27%, 10=2.91%, 20=4.82%, 50=9.81% 00:28:33.218 lat (msec) : 100=22.14%, 250=44.80%, 500=14.08% 00:28:33.218 cpu : usr=1.46%, sys=1.67%, ctx=2592, majf=0, minf=1 00:28:33.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:33.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:33.218 issued rwts: total=0,4504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.218 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:33.218 job2: (groupid=0, jobs=1): err= 0: pid=4182150: Sun Jul 21 08:24:41 2024 00:28:33.218 write: IOPS=536, BW=134MiB/s (141MB/s)(1358MiB/10128msec); 0 zone resets 00:28:33.218 slat (usec): min=20, max=64641, avg=1504.97, stdev=4313.61 
00:28:33.218 clat (usec): min=1208, max=371063, avg=117765.85, stdev=96721.71 00:28:33.218 lat (usec): min=1244, max=371144, avg=119270.82, stdev=98109.19 00:28:33.218 clat percentiles (msec): 00:28:33.218 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 32], 20.00th=[ 43], 00:28:33.218 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 73], 60.00th=[ 112], 00:28:33.218 | 70.00th=[ 165], 80.00th=[ 209], 90.00th=[ 271], 95.00th=[ 317], 00:28:33.218 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 372], 99.95th=[ 372], 00:28:33.218 | 99.99th=[ 372] 00:28:33.218 bw ( KiB/s): min=47104, max=366592, per=11.20%, avg=137446.40, stdev=102905.56, samples=20 00:28:33.218 iops : min= 184, max= 1432, avg=536.90, stdev=401.97, samples=20 00:28:33.218 lat (msec) : 2=0.06%, 4=0.37%, 10=2.10%, 20=4.27%, 50=36.62% 00:28:33.218 lat (msec) : 100=13.20%, 250=30.98%, 500=12.41% 00:28:33.218 cpu : usr=1.67%, sys=1.92%, ctx=2682, majf=0, minf=1 00:28:33.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:33.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:33.218 issued rwts: total=0,5432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.218 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:33.218 job3: (groupid=0, jobs=1): err= 0: pid=4182151: Sun Jul 21 08:24:41 2024 00:28:33.218 write: IOPS=441, BW=110MiB/s (116MB/s)(1118MiB/10132msec); 0 zone resets 00:28:33.218 slat (usec): min=16, max=123251, avg=1734.15, stdev=5710.35 00:28:33.218 clat (usec): min=1047, max=447457, avg=143231.14, stdev=110709.80 00:28:33.218 lat (usec): min=1122, max=447499, avg=144965.29, stdev=112182.02 00:28:33.218 clat percentiles (msec): 00:28:33.218 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 14], 20.00th=[ 27], 00:28:33.218 | 30.00th=[ 52], 40.00th=[ 86], 50.00th=[ 150], 60.00th=[ 176], 00:28:33.218 | 70.00th=[ 197], 80.00th=[ 234], 90.00th=[ 296], 95.00th=[ 363], 00:28:33.218 | 
99.00th=[ 422], 99.50th=[ 426], 99.90th=[ 443], 99.95th=[ 447], 00:28:33.218 | 99.99th=[ 447] 00:28:33.218 bw ( KiB/s): min=32768, max=269312, per=9.20%, avg=112852.40, stdev=63329.65, samples=20 00:28:33.218 iops : min= 128, max= 1052, avg=440.80, stdev=247.40, samples=20 00:28:33.218 lat (msec) : 2=0.49%, 4=1.32%, 10=5.05%, 20=8.59%, 50=11.97% 00:28:33.218 lat (msec) : 100=15.16%, 250=40.98%, 500=16.44% 00:28:33.218 cpu : usr=1.46%, sys=1.51%, ctx=2480, majf=0, minf=1 00:28:33.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:33.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:33.218 issued rwts: total=0,4471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.218 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:33.218 job4: (groupid=0, jobs=1): err= 0: pid=4182152: Sun Jul 21 08:24:41 2024 00:28:33.218 write: IOPS=683, BW=171MiB/s (179MB/s)(1742MiB/10195msec); 0 zone resets 00:28:33.218 slat (usec): min=21, max=44214, avg=1354.92, stdev=3054.08 00:28:33.218 clat (usec): min=1349, max=415616, avg=92233.42, stdev=61278.16 00:28:33.218 lat (usec): min=1393, max=415647, avg=93588.34, stdev=62012.30 00:28:33.218 clat percentiles (msec): 00:28:33.218 | 1.00th=[ 11], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 46], 00:28:33.218 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 74], 60.00th=[ 88], 00:28:33.218 | 70.00th=[ 111], 80.00th=[ 140], 90.00th=[ 188], 95.00th=[ 215], 00:28:33.218 | 99.00th=[ 296], 99.50th=[ 313], 99.90th=[ 388], 99.95th=[ 401], 00:28:33.218 | 99.99th=[ 418] 00:28:33.218 bw ( KiB/s): min=69632, max=390656, per=14.41%, avg=176768.00, stdev=100036.37, samples=20 00:28:33.218 iops : min= 272, max= 1526, avg=690.50, stdev=390.77, samples=20 00:28:33.218 lat (msec) : 2=0.07%, 4=0.27%, 10=0.59%, 20=1.02%, 50=38.03% 00:28:33.218 lat (msec) : 100=24.76%, 250=33.51%, 500=1.75% 00:28:33.218 cpu : usr=2.21%, 
sys=2.46%, ctx=2132, majf=0, minf=1 00:28:33.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:33.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:33.218 issued rwts: total=0,6968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.218 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:33.218 job5: (groupid=0, jobs=1): err= 0: pid=4182153: Sun Jul 21 08:24:41 2024 00:28:33.218 write: IOPS=395, BW=98.8MiB/s (104MB/s)(1007MiB/10197msec); 0 zone resets 00:28:33.218 slat (usec): min=23, max=48867, avg=1729.68, stdev=4943.51 00:28:33.218 clat (msec): min=2, max=410, avg=160.19, stdev=100.53 00:28:33.218 lat (msec): min=2, max=410, avg=161.92, stdev=101.93 00:28:33.218 clat percentiles (msec): 00:28:33.218 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 31], 20.00th=[ 71], 00:28:33.218 | 30.00th=[ 101], 40.00th=[ 115], 50.00th=[ 144], 60.00th=[ 176], 00:28:33.218 | 70.00th=[ 205], 80.00th=[ 259], 90.00th=[ 321], 95.00th=[ 347], 00:28:33.218 | 99.00th=[ 368], 99.50th=[ 372], 99.90th=[ 405], 99.95th=[ 405], 00:28:33.218 | 99.99th=[ 409] 00:28:33.218 bw ( KiB/s): min=47104, max=193536, per=8.27%, avg=101512.00, stdev=48047.87, samples=20 00:28:33.218 iops : min= 184, max= 756, avg=396.50, stdev=187.70, samples=20 00:28:33.218 lat (msec) : 4=0.22%, 10=2.86%, 20=3.53%, 50=8.74%, 100=14.60% 00:28:33.218 lat (msec) : 250=49.08%, 500=20.98% 00:28:33.218 cpu : usr=1.26%, sys=1.54%, ctx=2360, majf=0, minf=1 00:28:33.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:33.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:33.218 issued rwts: total=0,4028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.218 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:33.218 job6: (groupid=0, 
jobs=1): err= 0: pid=4182154: Sun Jul 21 08:24:41 2024 00:28:33.218 write: IOPS=468, BW=117MiB/s (123MB/s)(1186MiB/10128msec); 0 zone resets 00:28:33.218 slat (usec): min=22, max=95762, avg=1775.26, stdev=5485.83 00:28:33.218 clat (usec): min=1453, max=409738, avg=134430.46, stdev=106809.78 00:28:33.218 lat (usec): min=1492, max=409783, avg=136205.72, stdev=108292.63 00:28:33.218 clat percentiles (msec): 00:28:33.218 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 28], 20.00th=[ 42], 00:28:33.219 | 30.00th=[ 49], 40.00th=[ 71], 50.00th=[ 100], 60.00th=[ 142], 00:28:33.219 | 70.00th=[ 167], 80.00th=[ 224], 90.00th=[ 330], 95.00th=[ 359], 00:28:33.219 | 99.00th=[ 388], 99.50th=[ 393], 99.90th=[ 405], 99.95th=[ 409], 00:28:33.219 | 99.99th=[ 409] 00:28:33.219 bw ( KiB/s): min=45056, max=287744, per=9.77%, avg=119827.35, stdev=73057.65, samples=20 00:28:33.219 iops : min= 176, max= 1124, avg=468.05, stdev=285.35, samples=20 00:28:33.219 lat (msec) : 2=0.06%, 4=0.17%, 10=1.54%, 20=2.95%, 50=26.99% 00:28:33.219 lat (msec) : 100=18.36%, 250=33.46%, 500=16.47% 00:28:33.219 cpu : usr=1.45%, sys=1.63%, ctx=2175, majf=0, minf=1 00:28:33.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:28:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:33.219 issued rwts: total=0,4743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.219 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:33.219 job7: (groupid=0, jobs=1): err= 0: pid=4182155: Sun Jul 21 08:24:41 2024 00:28:33.219 write: IOPS=491, BW=123MiB/s (129MB/s)(1234MiB/10040msec); 0 zone resets 00:28:33.219 slat (usec): min=18, max=94365, avg=1541.91, stdev=5063.71 00:28:33.219 clat (usec): min=1470, max=389941, avg=128547.67, stdev=107792.65 00:28:33.219 lat (usec): min=1841, max=396235, avg=130089.58, stdev=109294.85 00:28:33.219 clat percentiles (msec): 00:28:33.219 | 1.00th=[ 5], 
5.00th=[ 10], 10.00th=[ 16], 20.00th=[ 31], 00:28:33.219 | 30.00th=[ 43], 40.00th=[ 50], 50.00th=[ 104], 60.00th=[ 148], 00:28:33.219 | 70.00th=[ 184], 80.00th=[ 220], 90.00th=[ 309], 95.00th=[ 338], 00:28:33.219 | 99.00th=[ 376], 99.50th=[ 384], 99.90th=[ 388], 99.95th=[ 388], 00:28:33.219 | 99.99th=[ 388] 00:28:33.219 bw ( KiB/s): min=47104, max=394752, per=10.17%, avg=124782.30, stdev=93317.23, samples=20 00:28:33.219 iops : min= 184, max= 1542, avg=487.40, stdev=364.54, samples=20 00:28:33.219 lat (msec) : 2=0.08%, 4=0.63%, 10=4.86%, 20=7.53%, 50=27.26% 00:28:33.219 lat (msec) : 100=8.95%, 250=34.29%, 500=16.39% 00:28:33.219 cpu : usr=1.64%, sys=1.49%, ctx=2943, majf=0, minf=1 00:28:33.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:28:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:33.219 issued rwts: total=0,4937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.219 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:33.219 job8: (groupid=0, jobs=1): err= 0: pid=4182156: Sun Jul 21 08:24:41 2024 00:28:33.219 write: IOPS=307, BW=76.9MiB/s (80.6MB/s)(784MiB/10195msec); 0 zone resets 00:28:33.219 slat (usec): min=23, max=55921, avg=2885.02, stdev=6475.96 00:28:33.219 clat (msec): min=2, max=448, avg=205.18, stdev=98.91 00:28:33.219 lat (msec): min=2, max=448, avg=208.06, stdev=100.47 00:28:33.219 clat percentiles (msec): 00:28:33.219 | 1.00th=[ 7], 5.00th=[ 35], 10.00th=[ 70], 20.00th=[ 102], 00:28:33.219 | 30.00th=[ 155], 40.00th=[ 178], 50.00th=[ 211], 60.00th=[ 243], 00:28:33.219 | 70.00th=[ 279], 80.00th=[ 305], 90.00th=[ 330], 95.00th=[ 347], 00:28:33.219 | 99.00th=[ 372], 99.50th=[ 401], 99.90th=[ 447], 99.95th=[ 447], 00:28:33.219 | 99.99th=[ 447] 00:28:33.219 bw ( KiB/s): min=47104, max=179712, per=6.41%, avg=78617.60, stdev=36128.13, samples=20 00:28:33.219 iops : min= 184, max= 702, 
avg=307.10, stdev=141.13, samples=20 00:28:33.219 lat (msec) : 4=0.06%, 10=2.30%, 20=0.86%, 50=3.83%, 100=12.67% 00:28:33.219 lat (msec) : 250=42.12%, 500=38.16% 00:28:33.219 cpu : usr=0.86%, sys=1.09%, ctx=1416, majf=0, minf=1 00:28:33.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:28:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:33.219 issued rwts: total=0,3134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.219 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:33.219 job9: (groupid=0, jobs=1): err= 0: pid=4182157: Sun Jul 21 08:24:41 2024 00:28:33.219 write: IOPS=323, BW=80.9MiB/s (84.9MB/s)(826MiB/10199msec); 0 zone resets 00:28:33.219 slat (usec): min=18, max=122572, avg=2204.48, stdev=6758.27 00:28:33.219 clat (usec): min=1079, max=416881, avg=195333.42, stdev=108938.26 00:28:33.219 lat (usec): min=1115, max=416906, avg=197537.90, stdev=110310.85 00:28:33.219 clat percentiles (msec): 00:28:33.219 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 21], 20.00th=[ 91], 00:28:33.219 | 30.00th=[ 146], 40.00th=[ 174], 50.00th=[ 201], 60.00th=[ 226], 00:28:33.219 | 70.00th=[ 266], 80.00th=[ 300], 90.00th=[ 347], 95.00th=[ 363], 00:28:33.219 | 99.00th=[ 380], 99.50th=[ 397], 99.90th=[ 401], 99.95th=[ 418], 00:28:33.219 | 99.99th=[ 418] 00:28:33.219 bw ( KiB/s): min=47104, max=160256, per=6.76%, avg=82899.10, stdev=33162.82, samples=20 00:28:33.219 iops : min= 184, max= 626, avg=323.80, stdev=129.56, samples=20 00:28:33.219 lat (msec) : 2=0.48%, 4=1.57%, 10=5.03%, 20=2.85%, 50=4.03% 00:28:33.219 lat (msec) : 100=8.93%, 250=45.25%, 500=31.86% 00:28:33.219 cpu : usr=0.92%, sys=1.12%, ctx=1812, majf=0, minf=1 00:28:33.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:28:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.219 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:33.219 issued rwts: total=0,3302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.219 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:33.219 job10: (groupid=0, jobs=1): err= 0: pid=4182165: Sun Jul 21 08:24:41 2024 00:28:33.219 write: IOPS=412, BW=103MiB/s (108MB/s)(1051MiB/10196msec); 0 zone resets 00:28:33.219 slat (usec): min=16, max=76727, avg=2093.66, stdev=5162.95 00:28:33.219 clat (usec): min=1421, max=415341, avg=153105.52, stdev=95010.60 00:28:33.219 lat (usec): min=1512, max=415372, avg=155199.18, stdev=96194.90 00:28:33.219 clat percentiles (msec): 00:28:33.219 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 45], 20.00th=[ 78], 00:28:33.219 | 30.00th=[ 91], 40.00th=[ 109], 50.00th=[ 133], 60.00th=[ 161], 00:28:33.219 | 70.00th=[ 194], 80.00th=[ 247], 90.00th=[ 300], 95.00th=[ 330], 00:28:33.219 | 99.00th=[ 363], 99.50th=[ 368], 99.90th=[ 401], 99.95th=[ 401], 00:28:33.219 | 99.99th=[ 418] 00:28:33.219 bw ( KiB/s): min=49152, max=181760, per=8.64%, avg=105949.10, stdev=42979.39, samples=20 00:28:33.219 iops : min= 192, max= 710, avg=413.85, stdev=167.87, samples=20 00:28:33.219 lat (msec) : 2=0.26%, 4=1.59%, 10=4.28%, 20=1.93%, 50=3.07% 00:28:33.219 lat (msec) : 100=24.89%, 250=44.19%, 500=19.78% 00:28:33.219 cpu : usr=1.31%, sys=1.43%, ctx=1757, majf=0, minf=1 00:28:33.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:33.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:33.219 issued rwts: total=0,4202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.219 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:33.219 00:28:33.219 Run status group 0 (all jobs): 00:28:33.219 WRITE: bw=1198MiB/s (1256MB/s), 76.9MiB/s-171MiB/s (80.6MB/s-179MB/s), io=11.9GiB (12.8GB), run=10040-10199msec 00:28:33.219 00:28:33.219 Disk stats (read/write): 00:28:33.219 
nvme0n1: ios=49/6297, merge=0/0, ticks=277/1240912, in_queue=1241189, util=99.69% 00:28:33.219 nvme10n1: ios=41/8733, merge=0/0, ticks=39/1214123, in_queue=1214162, util=97.49% 00:28:33.219 nvme1n1: ios=46/10688, merge=0/0, ticks=77/1211063, in_queue=1211140, util=97.77% 00:28:33.219 nvme2n1: ios=0/8767, merge=0/0, ticks=0/1212574, in_queue=1212574, util=97.74% 00:28:33.219 nvme3n1: ios=42/13915, merge=0/0, ticks=233/1233747, in_queue=1233980, util=99.80% 00:28:33.219 nvme4n1: ios=0/8039, merge=0/0, ticks=0/1245660, in_queue=1245660, util=98.20% 00:28:33.219 nvme5n1: ios=39/9316, merge=0/0, ticks=1241/1193377, in_queue=1194618, util=100.00% 00:28:33.219 nvme6n1: ios=43/9412, merge=0/0, ticks=2604/1220965, in_queue=1223569, util=100.00% 00:28:33.219 nvme7n1: ios=0/6247, merge=0/0, ticks=0/1235655, in_queue=1235655, util=98.78% 00:28:33.219 nvme8n1: ios=40/6570, merge=0/0, ticks=1362/1235067, in_queue=1236429, util=100.00% 00:28:33.219 nvme9n1: ios=44/8380, merge=0/0, ticks=525/1236563, in_queue=1237088, util=100.00% 00:28:33.219 08:24:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:28:33.219 08:24:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:28:33.219 08:24:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:33.219 08:24:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:33.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 
00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:33.219 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:33.219 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:28:33.219 08:24:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:33.477 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:33.477 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:33.737 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:33.737 08:24:43 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:33.737 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:33.993 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:28:33.994 08:24:43 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:33.994 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:34.251 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:34.251 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:34.251 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:34.251 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:34.251 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:28:34.251 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:34.252 08:24:43 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:34.252 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:34.252 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:28:34.517 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:34.517 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:34.517 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.517 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:34.517 08:24:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.517 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:34.517 08:24:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:34.517 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:34.517 08:24:44 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:34.517 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:34.783 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:28:34.783 08:24:44 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:34.783 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 
00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:34.783 rmmod nvme_tcp 00:28:34.783 rmmod nvme_fabrics 00:28:34.783 rmmod nvme_keyring 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 4176145 ']' 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 4176145 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 4176145 ']' 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 4176145 
00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4176145 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4176145' 00:28:34.783 killing process with pid 4176145 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 4176145 00:28:34.783 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 4176145 00:28:35.348 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:35.348 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:35.348 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:35.349 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:35.349 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:35.349 08:24:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.349 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.349 08:24:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.884 08:24:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:37.884 00:28:37.884 real 1m0.603s 00:28:37.884 user 3m26.122s 00:28:37.884 sys 0m22.968s 
00:28:37.884 08:24:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:37.884 08:24:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:37.884 ************************************ 00:28:37.884 END TEST nvmf_multiconnection 00:28:37.884 ************************************ 00:28:37.884 08:24:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:37.884 08:24:47 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:37.884 08:24:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:37.884 08:24:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.884 08:24:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:37.884 ************************************ 00:28:37.884 START TEST nvmf_initiator_timeout 00:28:37.884 ************************************ 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:37.884 * Looking for test storage... 
00:28:37.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:37.884 08:24:47 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:28:37.884 08:24:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 
00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.788 
08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:39.788 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 
-- # [[ tcp == rdma ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:39.788 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:28:39.788 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.788 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:39.789 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.789 08:24:49 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:39.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:28:39.789 00:28:39.789 --- 10.0.0.2 ping statistics --- 00:28:39.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.789 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:39.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:28:39.789 00:28:39.789 --- 10.0.0.1 ping statistics --- 00:28:39.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.789 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=4185614 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 4185614 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 4185614 ']' 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.789 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:39.789 [2024-07-21 08:24:49.352628] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:28:39.789 [2024-07-21 08:24:49.352730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.789 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.789 [2024-07-21 08:24:49.417248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:40.048 [2024-07-21 08:24:49.505670] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.048 [2024-07-21 08:24:49.505748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.048 [2024-07-21 08:24:49.505769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.048 [2024-07-21 08:24:49.505781] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.048 [2024-07-21 08:24:49.505792] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:40.048 [2024-07-21 08:24:49.505851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.048 [2024-07-21 08:24:49.505887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.048 [2024-07-21 08:24:49.505957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.048 [2024-07-21 08:24:49.505959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.048 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:40.048 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:40.048 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:40.048 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:40.048 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:40.048 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.048 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:40.048 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:40.048 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.048 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:40.306 Malloc0 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:40.306 Delay0 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:40.306 [2024-07-21 08:24:49.693183] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:40.306 08:24:49 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:40.306 [2024-07-21 08:24:49.721451] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.306 08:24:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:40.871 08:24:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:40.871 08:24:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:28:40.871 08:24:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:40.871 08:24:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:40.871 08:24:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:28:43.407 08:24:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:43.407 08:24:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:43.407 08:24:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:28:43.407 08:24:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:43.407 08:24:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:43.407 08:24:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:28:43.407 08:24:52 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=4186031 00:28:43.407 08:24:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:43.407 08:24:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:43.407 [global] 00:28:43.407 thread=1 00:28:43.407 invalidate=1 00:28:43.407 rw=write 00:28:43.407 time_based=1 00:28:43.407 runtime=60 00:28:43.407 ioengine=libaio 00:28:43.407 direct=1 00:28:43.407 bs=4096 00:28:43.407 iodepth=1 00:28:43.407 norandommap=0 00:28:43.407 numjobs=1 00:28:43.407 00:28:43.407 verify_dump=1 00:28:43.407 verify_backlog=512 00:28:43.407 verify_state_save=0 00:28:43.407 do_verify=1 00:28:43.407 verify=crc32c-intel 00:28:43.407 [job0] 00:28:43.407 filename=/dev/nvme0n1 00:28:43.407 Could not set queue depth (nvme0n1) 00:28:43.407 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:43.407 fio-3.35 00:28:43.407 Starting 1 thread 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:45.936 true 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:45.936 true 00:28:45.936 08:24:55 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:45.936 true 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:45.936 true 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.936 08:24:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:49.217 true 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:49.217 true 
00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:49.217 true 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:49.217 true 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:49.217 08:24:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 4186031 00:29:45.510 00:29:45.510 job0: (groupid=0, jobs=1): err= 0: pid=4186118: Sun Jul 21 08:25:52 2024 00:29:45.510 read: IOPS=153, BW=614KiB/s (629kB/s)(36.0MiB/60001msec) 00:29:45.510 slat (nsec): min=5367, max=81686, avg=11361.29, stdev=6826.21 00:29:45.510 clat (usec): min=236, max=41946, avg=1723.25, stdev=7457.65 00:29:45.510 lat (usec): min=243, max=41982, avg=1734.62, stdev=7459.72 00:29:45.510 clat percentiles (usec): 00:29:45.510 | 1.00th=[ 253], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:29:45.510 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:29:45.510 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 553], 00:29:45.510 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 
99.95th=[41681], 00:29:45.510 | 99.99th=[42206] 00:29:45.510 write: IOPS=160, BW=641KiB/s (656kB/s)(37.5MiB/60001msec); 0 zone resets 00:29:45.510 slat (usec): min=7, max=24757, avg=20.71, stdev=282.30 00:29:45.510 clat (usec): min=176, max=41271k, avg=4550.63, stdev=420980.13 00:29:45.510 lat (usec): min=184, max=41271k, avg=4571.35, stdev=420980.28 00:29:45.510 clat percentiles (usec): 00:29:45.510 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 00:29:45.510 | 20.00th=[ 212], 30.00th=[ 223], 40.00th=[ 233], 00:29:45.510 | 50.00th=[ 241], 60.00th=[ 255], 70.00th=[ 281], 00:29:45.510 | 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 367], 00:29:45.510 | 99.00th=[ 433], 99.50th=[ 445], 99.90th=[ 461], 00:29:45.510 | 99.95th=[ 469], 99.99th=[17112761] 00:29:45.510 bw ( KiB/s): min= 240, max= 8192, per=100.00%, avg=5356.31, stdev=2534.39, samples=13 00:29:45.510 iops : min= 60, max= 2048, avg=1339.08, stdev=633.60, samples=13 00:29:45.510 lat (usec) : 250=29.84%, 500=67.28%, 750=1.13%, 1000=0.02% 00:29:45.510 lat (msec) : 2=0.03%, 10=0.01%, 50=1.70%, >=2000=0.01% 00:29:45.510 cpu : usr=0.33%, sys=0.59%, ctx=18833, majf=0, minf=2 00:29:45.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:45.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.510 issued rwts: total=9216,9611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:45.510 00:29:45.510 Run status group 0 (all jobs): 00:29:45.510 READ: bw=614KiB/s (629kB/s), 614KiB/s-614KiB/s (629kB/s-629kB/s), io=36.0MiB (37.7MB), run=60001-60001msec 00:29:45.510 WRITE: bw=641KiB/s (656kB/s), 641KiB/s-641KiB/s (656kB/s-656kB/s), io=37.5MiB (39.4MB), run=60001-60001msec 00:29:45.510 00:29:45.510 Disk stats (read/write): 00:29:45.510 nvme0n1: ios=9243/9216, merge=0/0, ticks=17065/2219, in_queue=19284, util=99.89% 00:29:45.510 
08:25:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:45.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:45.510 nvmf hotplug test: fio successful as expected 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:45.510 rmmod nvme_tcp 00:29:45.510 rmmod nvme_fabrics 00:29:45.510 rmmod nvme_keyring 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:29:45.510 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 4185614 ']' 00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 4185614 00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 4185614 ']' 00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 4185614 00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4185614 00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4185614' 00:29:45.511 killing process with pid 4185614 00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 4185614 00:29:45.511 08:25:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 4185614 00:29:45.511 08:25:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:45.511 08:25:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:45.511 08:25:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:45.511 08:25:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:45.511 08:25:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:45.511 08:25:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.511 08:25:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:45.511 08:25:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.768 08:25:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:45.768 00:29:45.768 real 1m8.227s 00:29:45.768 user 4m10.379s 00:29:45.768 sys 0m7.053s 00:29:45.768 08:25:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:45.768 08:25:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:45.768 ************************************ 00:29:45.768 END TEST nvmf_initiator_timeout 00:29:45.768 ************************************ 00:29:45.768 08:25:55 nvmf_tcp -- common/autotest_common.sh@1142 -- 
# return 0 00:29:45.768 08:25:55 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:29:45.768 08:25:55 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:29:45.768 08:25:55 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:29:45.768 08:25:55 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:29:45.768 08:25:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:47.668 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:47.668 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.668 08:25:57 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:47.669 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:47.669 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:29:47.669 08:25:57 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:47.669 08:25:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:47.669 08:25:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:47.669 08:25:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.669 ************************************ 00:29:47.669 START TEST nvmf_perf_adq 00:29:47.669 ************************************ 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:47.669 * Looking for test storage... 
00:29:47.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.669 08:25:57 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:47.669 08:25:57 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:47.669 08:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.574 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:49.833 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.833 08:25:59 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:49.833 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.833 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.834 
08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:49.834 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:49.834 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:29:49.834 08:25:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:50.402 08:25:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:52.319 08:26:01 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:57.595 08:26:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.595 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:57.596 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:57.596 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.596 08:26:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:57.596 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:57.596 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:57.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:29:57.596 00:29:57.596 --- 10.0.0.2 ping statistics --- 00:29:57.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.596 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:57.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:29:57.596 00:29:57.596 --- 10.0.0.1 ping statistics --- 00:29:57.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.596 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:57.596 08:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:57.597 08:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:57.597 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3871 00:29:57.597 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:57.597 08:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3871 00:29:57.597 08:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # 
'[' -z 3871 ']' 00:29:57.597 08:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.597 08:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:57.597 08:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.597 08:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:57.597 08:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:57.597 [2024-07-21 08:26:07.014865] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:29:57.597 [2024-07-21 08:26:07.014962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.597 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.597 [2024-07-21 08:26:07.084804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.597 [2024-07-21 08:26:07.179837] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.597 [2024-07-21 08:26:07.179903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.597 [2024-07-21 08:26:07.179930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.597 [2024-07-21 08:26:07.179943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.597 [2024-07-21 08:26:07.179955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:57.597 [2024-07-21 08:26:07.180028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.597 [2024-07-21 08:26:07.180092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.597 [2024-07-21 08:26:07.180143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.597 [2024-07-21 08:26:07.180146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:57.855 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.856 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:57.856 [2024-07-21 08:26:07.476257] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.856 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.856 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:57.856 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.856 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:58.114 Malloc1 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.114 
08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:58.114 [2024-07-21 08:26:07.527496] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3907 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:29:58.114 08:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:58.114 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.036 08:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:30:00.036 08:26:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.036 08:26:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:00.036 08:26:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.036 08:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:30:00.036 
"tick_rate": 2700000000, 00:30:00.036 "poll_groups": [ 00:30:00.036 { 00:30:00.036 "name": "nvmf_tgt_poll_group_000", 00:30:00.036 "admin_qpairs": 1, 00:30:00.036 "io_qpairs": 1, 00:30:00.036 "current_admin_qpairs": 1, 00:30:00.036 "current_io_qpairs": 1, 00:30:00.036 "pending_bdev_io": 0, 00:30:00.036 "completed_nvme_io": 18637, 00:30:00.036 "transports": [ 00:30:00.036 { 00:30:00.036 "trtype": "TCP" 00:30:00.036 } 00:30:00.036 ] 00:30:00.036 }, 00:30:00.036 { 00:30:00.036 "name": "nvmf_tgt_poll_group_001", 00:30:00.036 "admin_qpairs": 0, 00:30:00.036 "io_qpairs": 1, 00:30:00.036 "current_admin_qpairs": 0, 00:30:00.036 "current_io_qpairs": 1, 00:30:00.036 "pending_bdev_io": 0, 00:30:00.036 "completed_nvme_io": 20771, 00:30:00.036 "transports": [ 00:30:00.036 { 00:30:00.036 "trtype": "TCP" 00:30:00.036 } 00:30:00.036 ] 00:30:00.036 }, 00:30:00.036 { 00:30:00.036 "name": "nvmf_tgt_poll_group_002", 00:30:00.036 "admin_qpairs": 0, 00:30:00.036 "io_qpairs": 1, 00:30:00.036 "current_admin_qpairs": 0, 00:30:00.036 "current_io_qpairs": 1, 00:30:00.036 "pending_bdev_io": 0, 00:30:00.036 "completed_nvme_io": 20941, 00:30:00.036 "transports": [ 00:30:00.036 { 00:30:00.036 "trtype": "TCP" 00:30:00.036 } 00:30:00.036 ] 00:30:00.036 }, 00:30:00.036 { 00:30:00.036 "name": "nvmf_tgt_poll_group_003", 00:30:00.036 "admin_qpairs": 0, 00:30:00.036 "io_qpairs": 1, 00:30:00.036 "current_admin_qpairs": 0, 00:30:00.036 "current_io_qpairs": 1, 00:30:00.036 "pending_bdev_io": 0, 00:30:00.036 "completed_nvme_io": 20861, 00:30:00.036 "transports": [ 00:30:00.036 { 00:30:00.036 "trtype": "TCP" 00:30:00.036 } 00:30:00.036 ] 00:30:00.036 } 00:30:00.036 ] 00:30:00.036 }' 00:30:00.036 08:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:30:00.036 08:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:30:00.036 08:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:30:00.036 08:26:09 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:30:00.036 08:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3907 00:30:08.189 Initializing NVMe Controllers 00:30:08.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:08.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:08.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:08.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:08.189 Initialization complete. Launching workers. 00:30:08.189 ======================================================== 00:30:08.189 Latency(us) 00:30:08.189 Device Information : IOPS MiB/s Average min max 00:30:08.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9773.20 38.18 6549.28 2650.47 9802.14 00:30:08.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10904.90 42.60 5870.35 2391.05 8407.81 00:30:08.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11002.80 42.98 5818.64 3038.68 8238.44 00:30:08.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10931.70 42.70 5854.33 2791.44 8577.72 00:30:08.189 ======================================================== 00:30:08.189 Total : 42612.58 166.46 6008.60 2391.05 9802.14 00:30:08.189 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:30:08.189 08:26:17 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:08.189 rmmod nvme_tcp 00:30:08.189 rmmod nvme_fabrics 00:30:08.189 rmmod nvme_keyring 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3871 ']' 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3871 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3871 ']' 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3871 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3871 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3871' 00:30:08.189 killing process with pid 3871 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3871 00:30:08.189 08:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3871 00:30:08.447 08:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:08.447 08:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:08.447 08:26:18 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:08.447 08:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.447 08:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:08.447 08:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.447 08:26:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.447 08:26:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.000 08:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:11.000 08:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:30:11.000 08:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:30:11.258 08:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:30:13.165 08:26:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 
00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:18.436 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.436 
08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:18.436 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:30:18.436 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:18.436 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 
1 )) 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:18.436 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:18.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:18.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:30:18.437 00:30:18.437 --- 10.0.0.2 ping statistics --- 00:30:18.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.437 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:18.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:30:18.437 00:30:18.437 --- 10.0.0.1 ping statistics --- 00:30:18.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.437 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:30:18.437 net.core.busy_poll = 1 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:30:18.437 net.core.busy_read = 1 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=6517 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 6517 00:30:18.437 08:26:27 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 6517 ']' 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:18.437 08:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.437 [2024-07-21 08:26:28.037178] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:30:18.437 [2024-07-21 08:26:28.037271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.694 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.694 [2024-07-21 08:26:28.110232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:18.694 [2024-07-21 08:26:28.202876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.694 [2024-07-21 08:26:28.202935] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.694 [2024-07-21 08:26:28.202950] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.694 [2024-07-21 08:26:28.202963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.694 [2024-07-21 08:26:28.202975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
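[Note: the ADQ driver setup exercised above (perf_adq.sh@22-38) boils down to a short ethtool/sysctl/tc sequence. The following is a condensed sketch, not the test script itself: interface name, destination address and NVMe/TCP port are the values from this log and are specific to this testbed, and the script assumes root plus an ADQ-capable Intel E810 (ice) NIC — it no-ops otherwise.]

```shell
#!/usr/bin/env bash
# Sketch of the ADQ setup performed by perf_adq.sh in this log.
# IFACE/ADDR/PORT are this testbed's values; adjust for your setup.
set -e
IFACE=cvl_0_0
ADDR=10.0.0.2
PORT=4420

# Guard: needs root and the test NIC; exit quietly elsewhere.
[ "$(id -u)" -eq 0 ] || { echo "skipping: requires root"; exit 0; }
ip link show "$IFACE" >/dev/null 2>&1 || { echo "skipping: no $IFACE"; exit 0; }

# Enable hardware traffic-class offload; disable packet-inspect optimization.
ethtool --offload "$IFACE" hw-tc-offload on
ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

# Socket busy polling, so application threads poll their own queues.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ).
tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$IFACE" ingress

# Steer NVMe/TCP traffic (dst ADDR:PORT) into TC1 in hardware (skip_sw).
tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip "$ADDR"/32 ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1
```

[In the log the same commands run inside the cvl_0_0_ns_spdk network namespace (prefixed with `ip netns exec`), and scripts/perf/nvmf/set_xps_rxqs then pins transmit queues to CPUs.]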
00:30:18.694 [2024-07-21 08:26:28.203035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.694 [2024-07-21 08:26:28.203093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:18.694 [2024-07-21 08:26:28.203129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:18.694 [2024-07-21 08:26:28.203133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.694 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.950 [2024-07-21 08:26:28.453261] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.950 Malloc1 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.950 
08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:18.950 [2024-07-21 08:26:28.504145] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=6667 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:18.950 08:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:30:18.950 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.478 08:26:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:30:21.478 08:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.478 08:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:21.478 08:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.478 08:26:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:30:21.478 
"tick_rate": 2700000000, 00:30:21.478 "poll_groups": [ 00:30:21.478 { 00:30:21.478 "name": "nvmf_tgt_poll_group_000", 00:30:21.478 "admin_qpairs": 1, 00:30:21.478 "io_qpairs": 1, 00:30:21.478 "current_admin_qpairs": 1, 00:30:21.478 "current_io_qpairs": 1, 00:30:21.478 "pending_bdev_io": 0, 00:30:21.478 "completed_nvme_io": 26050, 00:30:21.478 "transports": [ 00:30:21.478 { 00:30:21.478 "trtype": "TCP" 00:30:21.478 } 00:30:21.478 ] 00:30:21.478 }, 00:30:21.478 { 00:30:21.478 "name": "nvmf_tgt_poll_group_001", 00:30:21.478 "admin_qpairs": 0, 00:30:21.478 "io_qpairs": 3, 00:30:21.478 "current_admin_qpairs": 0, 00:30:21.478 "current_io_qpairs": 3, 00:30:21.478 "pending_bdev_io": 0, 00:30:21.478 "completed_nvme_io": 26116, 00:30:21.478 "transports": [ 00:30:21.478 { 00:30:21.478 "trtype": "TCP" 00:30:21.478 } 00:30:21.478 ] 00:30:21.478 }, 00:30:21.478 { 00:30:21.478 "name": "nvmf_tgt_poll_group_002", 00:30:21.478 "admin_qpairs": 0, 00:30:21.478 "io_qpairs": 0, 00:30:21.478 "current_admin_qpairs": 0, 00:30:21.478 "current_io_qpairs": 0, 00:30:21.478 "pending_bdev_io": 0, 00:30:21.478 "completed_nvme_io": 0, 00:30:21.478 "transports": [ 00:30:21.478 { 00:30:21.478 "trtype": "TCP" 00:30:21.478 } 00:30:21.478 ] 00:30:21.478 }, 00:30:21.478 { 00:30:21.478 "name": "nvmf_tgt_poll_group_003", 00:30:21.478 "admin_qpairs": 0, 00:30:21.478 "io_qpairs": 0, 00:30:21.478 "current_admin_qpairs": 0, 00:30:21.478 "current_io_qpairs": 0, 00:30:21.478 "pending_bdev_io": 0, 00:30:21.478 "completed_nvme_io": 0, 00:30:21.478 "transports": [ 00:30:21.478 { 00:30:21.478 "trtype": "TCP" 00:30:21.478 } 00:30:21.478 ] 00:30:21.478 } 00:30:21.478 ] 00:30:21.478 }' 00:30:21.478 08:26:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:30:21.478 08:26:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:30:21.478 08:26:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:30:21.478 08:26:30 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:30:21.478 08:26:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 6667 00:30:29.594 Initializing NVMe Controllers 00:30:29.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:29.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:29.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:29.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:29.594 Initialization complete. Launching workers. 00:30:29.594 ======================================================== 00:30:29.594 Latency(us) 00:30:29.594 Device Information : IOPS MiB/s Average min max 00:30:29.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4195.60 16.39 15264.70 2974.86 61429.57 00:30:29.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4882.30 19.07 13118.03 2121.21 61740.79 00:30:29.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4614.40 18.02 13878.66 1820.24 62865.27 00:30:29.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13728.60 53.63 4662.43 1593.69 6661.85 00:30:29.594 ======================================================== 00:30:29.595 Total : 27420.90 107.11 9341.09 1593.69 62865.27 00:30:29.595 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:30:29.595 08:26:38 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:29.595 rmmod nvme_tcp 00:30:29.595 rmmod nvme_fabrics 00:30:29.595 rmmod nvme_keyring 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 6517 ']' 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 6517 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 6517 ']' 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 6517 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 6517 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 6517' 00:30:29.595 killing process with pid 6517 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 6517 00:30:29.595 08:26:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 6517 00:30:29.595 08:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:29.595 08:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:29.595 08:26:39 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:29.595 08:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:29.595 08:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:29.595 08:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.595 08:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:29.595 08:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.897 08:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:32.897 08:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:32.897 00:30:32.897 real 0m44.884s 00:30:32.897 user 2m38.339s 00:30:32.897 sys 0m9.972s 00:30:32.897 08:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.897 08:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:32.897 ************************************ 00:30:32.897 END TEST nvmf_perf_adq 00:30:32.897 ************************************ 00:30:32.897 08:26:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:32.897 08:26:42 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:32.897 08:26:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:32.897 08:26:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.897 08:26:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:32.897 ************************************ 00:30:32.897 START TEST nvmf_shutdown 00:30:32.897 ************************************ 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh 
--transport=tcp 00:30:32.897 * Looking for test storage... 00:30:32.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.897 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:32.898 08:26:42 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:32.898 ************************************ 00:30:32.898 START TEST nvmf_shutdown_tc1 00:30:32.898 ************************************ 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:32.898 08:26:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:32.898 08:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:34.858 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:34.859 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:34.859 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:34.859 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.859 08:26:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:34.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:34.859 
08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:34.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:34.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:30:34.859 00:30:34.859 --- 10.0.0.2 ping statistics --- 00:30:34.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.859 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:34.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:30:34.859 00:30:34.859 --- 10.0.0.1 ping statistics --- 00:30:34.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.859 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=9854 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 9854 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 9854 ']' 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:34.859 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:34.859 [2024-07-21 08:26:44.264229] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:30:34.859 [2024-07-21 08:26:44.264317] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.859 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.859 [2024-07-21 08:26:44.328873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:34.859 [2024-07-21 08:26:44.414326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.859 [2024-07-21 08:26:44.414380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.859 [2024-07-21 08:26:44.414408] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.859 [2024-07-21 08:26:44.414420] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.859 [2024-07-21 08:26:44.414429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:34.859 [2024-07-21 08:26:44.414512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.859 [2024-07-21 08:26:44.414574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:34.859 [2024-07-21 08:26:44.414633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:34.859 [2024-07-21 08:26:44.414637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.118 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:35.118 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:30:35.118 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:35.118 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:35.118 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:35.118 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.118 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.118 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:35.119 [2024-07-21 08:26:44.552307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:35.119 
08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:35.119 08:26:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.119 08:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:35.119 Malloc1 00:30:35.119 [2024-07-21 08:26:44.637346] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.119 Malloc2 00:30:35.119 Malloc3 00:30:35.378 Malloc4 00:30:35.378 Malloc5 00:30:35.378 Malloc6 00:30:35.378 Malloc7 00:30:35.378 Malloc8 00:30:35.637 Malloc9 00:30:35.637 Malloc10 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=10011 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 10011 
/var/tmp/bdevperf.sock 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 10011 ']' 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:35.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.637 { 00:30:35.637 "params": { 00:30:35.637 "name": "Nvme$subsystem", 00:30:35.637 "trtype": "$TEST_TRANSPORT", 00:30:35.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.637 "adrfam": "ipv4", 00:30:35.637 "trsvcid": "$NVMF_PORT", 00:30:35.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.637 "hdgst": ${hdgst:-false}, 00:30:35.637 "ddgst": ${ddgst:-false} 00:30:35.637 }, 00:30:35.637 "method": "bdev_nvme_attach_controller" 00:30:35.637 } 00:30:35.637 EOF 00:30:35.637 )") 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.637 { 00:30:35.637 "params": { 00:30:35.637 "name": "Nvme$subsystem", 00:30:35.637 "trtype": "$TEST_TRANSPORT", 00:30:35.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.637 "adrfam": "ipv4", 00:30:35.637 "trsvcid": "$NVMF_PORT", 00:30:35.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.637 "hdgst": ${hdgst:-false}, 00:30:35.637 "ddgst": ${ddgst:-false} 00:30:35.637 }, 00:30:35.637 "method": "bdev_nvme_attach_controller" 00:30:35.637 } 00:30:35.637 EOF 00:30:35.637 
)") 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.637 { 00:30:35.637 "params": { 00:30:35.637 "name": "Nvme$subsystem", 00:30:35.637 "trtype": "$TEST_TRANSPORT", 00:30:35.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.637 "adrfam": "ipv4", 00:30:35.637 "trsvcid": "$NVMF_PORT", 00:30:35.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.637 "hdgst": ${hdgst:-false}, 00:30:35.637 "ddgst": ${ddgst:-false} 00:30:35.637 }, 00:30:35.637 "method": "bdev_nvme_attach_controller" 00:30:35.637 } 00:30:35.637 EOF 00:30:35.637 )") 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.637 { 00:30:35.637 "params": { 00:30:35.637 "name": "Nvme$subsystem", 00:30:35.637 "trtype": "$TEST_TRANSPORT", 00:30:35.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.637 "adrfam": "ipv4", 00:30:35.637 "trsvcid": "$NVMF_PORT", 00:30:35.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.637 "hdgst": ${hdgst:-false}, 00:30:35.637 "ddgst": ${ddgst:-false} 00:30:35.637 }, 00:30:35.637 "method": "bdev_nvme_attach_controller" 00:30:35.637 } 00:30:35.637 EOF 00:30:35.637 )") 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.637 08:26:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.637 { 00:30:35.637 "params": { 00:30:35.637 "name": "Nvme$subsystem", 00:30:35.637 "trtype": "$TEST_TRANSPORT", 00:30:35.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.637 "adrfam": "ipv4", 00:30:35.637 "trsvcid": "$NVMF_PORT", 00:30:35.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.637 "hdgst": ${hdgst:-false}, 00:30:35.637 "ddgst": ${ddgst:-false} 00:30:35.637 }, 00:30:35.637 "method": "bdev_nvme_attach_controller" 00:30:35.637 } 00:30:35.637 EOF 00:30:35.637 )") 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.637 { 00:30:35.637 "params": { 00:30:35.637 "name": "Nvme$subsystem", 00:30:35.637 "trtype": "$TEST_TRANSPORT", 00:30:35.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.637 "adrfam": "ipv4", 00:30:35.637 "trsvcid": "$NVMF_PORT", 00:30:35.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.637 "hdgst": ${hdgst:-false}, 00:30:35.637 "ddgst": ${ddgst:-false} 00:30:35.637 }, 00:30:35.637 "method": "bdev_nvme_attach_controller" 00:30:35.637 } 00:30:35.637 EOF 00:30:35.637 )") 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.637 { 00:30:35.637 "params": { 00:30:35.637 "name": "Nvme$subsystem", 00:30:35.637 "trtype": "$TEST_TRANSPORT", 00:30:35.637 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:30:35.637 "adrfam": "ipv4", 00:30:35.637 "trsvcid": "$NVMF_PORT", 00:30:35.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.637 "hdgst": ${hdgst:-false}, 00:30:35.637 "ddgst": ${ddgst:-false} 00:30:35.637 }, 00:30:35.637 "method": "bdev_nvme_attach_controller" 00:30:35.637 } 00:30:35.637 EOF 00:30:35.637 )") 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.637 { 00:30:35.637 "params": { 00:30:35.637 "name": "Nvme$subsystem", 00:30:35.637 "trtype": "$TEST_TRANSPORT", 00:30:35.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.637 "adrfam": "ipv4", 00:30:35.637 "trsvcid": "$NVMF_PORT", 00:30:35.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.637 "hdgst": ${hdgst:-false}, 00:30:35.637 "ddgst": ${ddgst:-false} 00:30:35.637 }, 00:30:35.637 "method": "bdev_nvme_attach_controller" 00:30:35.637 } 00:30:35.637 EOF 00:30:35.637 )") 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.637 { 00:30:35.637 "params": { 00:30:35.637 "name": "Nvme$subsystem", 00:30:35.637 "trtype": "$TEST_TRANSPORT", 00:30:35.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.637 "adrfam": "ipv4", 00:30:35.637 "trsvcid": "$NVMF_PORT", 00:30:35.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.637 
"hdgst": ${hdgst:-false}, 00:30:35.637 "ddgst": ${ddgst:-false} 00:30:35.637 }, 00:30:35.637 "method": "bdev_nvme_attach_controller" 00:30:35.637 } 00:30:35.637 EOF 00:30:35.637 )") 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.637 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.637 { 00:30:35.637 "params": { 00:30:35.637 "name": "Nvme$subsystem", 00:30:35.637 "trtype": "$TEST_TRANSPORT", 00:30:35.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.637 "adrfam": "ipv4", 00:30:35.637 "trsvcid": "$NVMF_PORT", 00:30:35.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.637 "hdgst": ${hdgst:-false}, 00:30:35.637 "ddgst": ${ddgst:-false} 00:30:35.638 }, 00:30:35.638 "method": "bdev_nvme_attach_controller" 00:30:35.638 } 00:30:35.638 EOF 00:30:35.638 )") 00:30:35.638 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:35.638 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:30:35.638 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:35.638 08:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:35.638 "params": { 00:30:35.638 "name": "Nvme1", 00:30:35.638 "trtype": "tcp", 00:30:35.638 "traddr": "10.0.0.2", 00:30:35.638 "adrfam": "ipv4", 00:30:35.638 "trsvcid": "4420", 00:30:35.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.638 "hdgst": false, 00:30:35.638 "ddgst": false 00:30:35.638 }, 00:30:35.638 "method": "bdev_nvme_attach_controller" 00:30:35.638 },{ 00:30:35.638 "params": { 00:30:35.638 "name": "Nvme2", 00:30:35.638 "trtype": "tcp", 00:30:35.638 "traddr": "10.0.0.2", 00:30:35.638 "adrfam": "ipv4", 00:30:35.638 "trsvcid": "4420", 00:30:35.638 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:35.638 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:35.638 "hdgst": false, 00:30:35.638 "ddgst": false 00:30:35.638 }, 00:30:35.638 "method": "bdev_nvme_attach_controller" 00:30:35.638 },{ 00:30:35.638 "params": { 00:30:35.638 "name": "Nvme3", 00:30:35.638 "trtype": "tcp", 00:30:35.638 "traddr": "10.0.0.2", 00:30:35.638 "adrfam": "ipv4", 00:30:35.638 "trsvcid": "4420", 00:30:35.638 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:35.638 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:35.638 "hdgst": false, 00:30:35.638 "ddgst": false 00:30:35.638 }, 00:30:35.638 "method": "bdev_nvme_attach_controller" 00:30:35.638 },{ 00:30:35.638 "params": { 00:30:35.638 "name": "Nvme4", 00:30:35.638 "trtype": "tcp", 00:30:35.638 "traddr": "10.0.0.2", 00:30:35.638 "adrfam": "ipv4", 00:30:35.638 "trsvcid": "4420", 00:30:35.638 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:35.638 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:35.638 "hdgst": false, 00:30:35.638 "ddgst": false 00:30:35.638 }, 00:30:35.638 "method": "bdev_nvme_attach_controller" 00:30:35.638 },{ 00:30:35.638 "params": { 00:30:35.638 "name": "Nvme5", 00:30:35.638 
"trtype": "tcp", 00:30:35.638 "traddr": "10.0.0.2", 00:30:35.638 "adrfam": "ipv4", 00:30:35.638 "trsvcid": "4420", 00:30:35.638 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:35.638 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:35.638 "hdgst": false, 00:30:35.638 "ddgst": false 00:30:35.638 }, 00:30:35.638 "method": "bdev_nvme_attach_controller" 00:30:35.638 },{ 00:30:35.638 "params": { 00:30:35.638 "name": "Nvme6", 00:30:35.638 "trtype": "tcp", 00:30:35.638 "traddr": "10.0.0.2", 00:30:35.638 "adrfam": "ipv4", 00:30:35.638 "trsvcid": "4420", 00:30:35.638 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:35.638 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:35.638 "hdgst": false, 00:30:35.638 "ddgst": false 00:30:35.638 }, 00:30:35.638 "method": "bdev_nvme_attach_controller" 00:30:35.638 },{ 00:30:35.638 "params": { 00:30:35.638 "name": "Nvme7", 00:30:35.638 "trtype": "tcp", 00:30:35.638 "traddr": "10.0.0.2", 00:30:35.638 "adrfam": "ipv4", 00:30:35.638 "trsvcid": "4420", 00:30:35.638 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:35.638 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:35.638 "hdgst": false, 00:30:35.638 "ddgst": false 00:30:35.638 }, 00:30:35.638 "method": "bdev_nvme_attach_controller" 00:30:35.638 },{ 00:30:35.638 "params": { 00:30:35.638 "name": "Nvme8", 00:30:35.638 "trtype": "tcp", 00:30:35.638 "traddr": "10.0.0.2", 00:30:35.638 "adrfam": "ipv4", 00:30:35.638 "trsvcid": "4420", 00:30:35.638 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:35.638 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:35.638 "hdgst": false, 00:30:35.638 "ddgst": false 00:30:35.638 }, 00:30:35.638 "method": "bdev_nvme_attach_controller" 00:30:35.638 },{ 00:30:35.638 "params": { 00:30:35.638 "name": "Nvme9", 00:30:35.638 "trtype": "tcp", 00:30:35.638 "traddr": "10.0.0.2", 00:30:35.638 "adrfam": "ipv4", 00:30:35.638 "trsvcid": "4420", 00:30:35.638 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:35.638 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:35.638 "hdgst": false, 00:30:35.638 "ddgst": 
false 00:30:35.638 }, 00:30:35.638 "method": "bdev_nvme_attach_controller" 00:30:35.638 },{ 00:30:35.638 "params": { 00:30:35.638 "name": "Nvme10", 00:30:35.638 "trtype": "tcp", 00:30:35.638 "traddr": "10.0.0.2", 00:30:35.638 "adrfam": "ipv4", 00:30:35.638 "trsvcid": "4420", 00:30:35.638 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:35.638 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:35.638 "hdgst": false, 00:30:35.638 "ddgst": false 00:30:35.638 }, 00:30:35.638 "method": "bdev_nvme_attach_controller" 00:30:35.638 }' 00:30:35.638 [2024-07-21 08:26:45.152542] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:30:35.638 [2024-07-21 08:26:45.152648] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:35.638 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.638 [2024-07-21 08:26:45.215109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.897 [2024-07-21 08:26:45.303046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.796 08:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:37.796 08:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:30:37.796 08:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:37.796 08:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.796 08:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:37.796 08:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.796 08:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # 
kill -9 10011 00:30:37.796 08:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:30:37.796 08:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:30:38.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 10011 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 9854 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.732 { 00:30:38.732 "params": { 00:30:38.732 "name": "Nvme$subsystem", 00:30:38.732 "trtype": "$TEST_TRANSPORT", 00:30:38.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.732 "adrfam": "ipv4", 00:30:38.732 "trsvcid": "$NVMF_PORT", 00:30:38.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.732 "hdgst": ${hdgst:-false}, 00:30:38.732 "ddgst": ${ddgst:-false} 00:30:38.732 }, 00:30:38.732 "method": "bdev_nvme_attach_controller" 00:30:38.732 } 00:30:38.732 EOF 00:30:38.732 )") 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.732 { 00:30:38.732 "params": { 00:30:38.732 "name": "Nvme$subsystem", 00:30:38.732 "trtype": "$TEST_TRANSPORT", 00:30:38.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.732 "adrfam": "ipv4", 00:30:38.732 "trsvcid": "$NVMF_PORT", 00:30:38.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.732 "hdgst": ${hdgst:-false}, 00:30:38.732 "ddgst": ${ddgst:-false} 00:30:38.732 }, 00:30:38.732 "method": "bdev_nvme_attach_controller" 00:30:38.732 } 00:30:38.732 EOF 00:30:38.732 )") 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.732 { 00:30:38.732 "params": { 00:30:38.732 "name": "Nvme$subsystem", 00:30:38.732 "trtype": "$TEST_TRANSPORT", 00:30:38.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.732 "adrfam": "ipv4", 00:30:38.732 "trsvcid": "$NVMF_PORT", 00:30:38.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.732 "hdgst": ${hdgst:-false}, 00:30:38.732 "ddgst": ${ddgst:-false} 00:30:38.732 }, 00:30:38.732 "method": "bdev_nvme_attach_controller" 00:30:38.732 } 00:30:38.732 EOF 00:30:38.732 )") 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:30:38.732 { 00:30:38.732 "params": { 00:30:38.732 "name": "Nvme$subsystem", 00:30:38.732 "trtype": "$TEST_TRANSPORT", 00:30:38.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.732 "adrfam": "ipv4", 00:30:38.732 "trsvcid": "$NVMF_PORT", 00:30:38.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.732 "hdgst": ${hdgst:-false}, 00:30:38.732 "ddgst": ${ddgst:-false} 00:30:38.732 }, 00:30:38.732 "method": "bdev_nvme_attach_controller" 00:30:38.732 } 00:30:38.732 EOF 00:30:38.732 )") 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.732 { 00:30:38.732 "params": { 00:30:38.732 "name": "Nvme$subsystem", 00:30:38.732 "trtype": "$TEST_TRANSPORT", 00:30:38.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.732 "adrfam": "ipv4", 00:30:38.732 "trsvcid": "$NVMF_PORT", 00:30:38.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.732 "hdgst": ${hdgst:-false}, 00:30:38.732 "ddgst": ${ddgst:-false} 00:30:38.732 }, 00:30:38.732 "method": "bdev_nvme_attach_controller" 00:30:38.732 } 00:30:38.732 EOF 00:30:38.732 )") 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.732 { 00:30:38.732 "params": { 00:30:38.732 "name": "Nvme$subsystem", 00:30:38.732 "trtype": "$TEST_TRANSPORT", 00:30:38.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.732 "adrfam": "ipv4", 00:30:38.732 
"trsvcid": "$NVMF_PORT", 00:30:38.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.732 "hdgst": ${hdgst:-false}, 00:30:38.732 "ddgst": ${ddgst:-false} 00:30:38.732 }, 00:30:38.732 "method": "bdev_nvme_attach_controller" 00:30:38.732 } 00:30:38.732 EOF 00:30:38.732 )") 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.732 { 00:30:38.732 "params": { 00:30:38.732 "name": "Nvme$subsystem", 00:30:38.732 "trtype": "$TEST_TRANSPORT", 00:30:38.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.732 "adrfam": "ipv4", 00:30:38.732 "trsvcid": "$NVMF_PORT", 00:30:38.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.732 "hdgst": ${hdgst:-false}, 00:30:38.732 "ddgst": ${ddgst:-false} 00:30:38.732 }, 00:30:38.732 "method": "bdev_nvme_attach_controller" 00:30:38.732 } 00:30:38.732 EOF 00:30:38.732 )") 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.732 { 00:30:38.732 "params": { 00:30:38.732 "name": "Nvme$subsystem", 00:30:38.732 "trtype": "$TEST_TRANSPORT", 00:30:38.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.732 "adrfam": "ipv4", 00:30:38.732 "trsvcid": "$NVMF_PORT", 00:30:38.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.732 "hdgst": ${hdgst:-false}, 00:30:38.732 "ddgst": ${ddgst:-false} 
00:30:38.732 }, 00:30:38.732 "method": "bdev_nvme_attach_controller" 00:30:38.732 } 00:30:38.732 EOF 00:30:38.732 )") 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.732 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.732 { 00:30:38.732 "params": { 00:30:38.733 "name": "Nvme$subsystem", 00:30:38.733 "trtype": "$TEST_TRANSPORT", 00:30:38.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "$NVMF_PORT", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.733 "hdgst": ${hdgst:-false}, 00:30:38.733 "ddgst": ${ddgst:-false} 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 } 00:30:38.733 EOF 00:30:38.733 )") 00:30:38.733 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:38.733 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.733 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.733 { 00:30:38.733 "params": { 00:30:38.733 "name": "Nvme$subsystem", 00:30:38.733 "trtype": "$TEST_TRANSPORT", 00:30:38.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "$NVMF_PORT", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.733 "hdgst": ${hdgst:-false}, 00:30:38.733 "ddgst": ${ddgst:-false} 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 } 00:30:38.733 EOF 00:30:38.733 )") 00:30:38.733 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:38.733 08:26:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:30:38.733 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:38.733 08:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:38.733 "params": { 00:30:38.733 "name": "Nvme1", 00:30:38.733 "trtype": "tcp", 00:30:38.733 "traddr": "10.0.0.2", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "4420", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:38.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:38.733 "hdgst": false, 00:30:38.733 "ddgst": false 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 },{ 00:30:38.733 "params": { 00:30:38.733 "name": "Nvme2", 00:30:38.733 "trtype": "tcp", 00:30:38.733 "traddr": "10.0.0.2", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "4420", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:38.733 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:38.733 "hdgst": false, 00:30:38.733 "ddgst": false 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 },{ 00:30:38.733 "params": { 00:30:38.733 "name": "Nvme3", 00:30:38.733 "trtype": "tcp", 00:30:38.733 "traddr": "10.0.0.2", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "4420", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:38.733 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:38.733 "hdgst": false, 00:30:38.733 "ddgst": false 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 },{ 00:30:38.733 "params": { 00:30:38.733 "name": "Nvme4", 00:30:38.733 "trtype": "tcp", 00:30:38.733 "traddr": "10.0.0.2", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "4420", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:38.733 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:38.733 "hdgst": false, 00:30:38.733 "ddgst": false 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 
},{ 00:30:38.733 "params": { 00:30:38.733 "name": "Nvme5", 00:30:38.733 "trtype": "tcp", 00:30:38.733 "traddr": "10.0.0.2", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "4420", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:38.733 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:38.733 "hdgst": false, 00:30:38.733 "ddgst": false 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 },{ 00:30:38.733 "params": { 00:30:38.733 "name": "Nvme6", 00:30:38.733 "trtype": "tcp", 00:30:38.733 "traddr": "10.0.0.2", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "4420", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:38.733 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:38.733 "hdgst": false, 00:30:38.733 "ddgst": false 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 },{ 00:30:38.733 "params": { 00:30:38.733 "name": "Nvme7", 00:30:38.733 "trtype": "tcp", 00:30:38.733 "traddr": "10.0.0.2", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "4420", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:38.733 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:38.733 "hdgst": false, 00:30:38.733 "ddgst": false 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 },{ 00:30:38.733 "params": { 00:30:38.733 "name": "Nvme8", 00:30:38.733 "trtype": "tcp", 00:30:38.733 "traddr": "10.0.0.2", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "4420", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:38.733 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:38.733 "hdgst": false, 00:30:38.733 "ddgst": false 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 },{ 00:30:38.733 "params": { 00:30:38.733 "name": "Nvme9", 00:30:38.733 "trtype": "tcp", 00:30:38.733 "traddr": "10.0.0.2", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "4420", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:38.733 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:30:38.733 "hdgst": false, 00:30:38.733 "ddgst": false 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 },{ 00:30:38.733 "params": { 00:30:38.733 "name": "Nvme10", 00:30:38.733 "trtype": "tcp", 00:30:38.733 "traddr": "10.0.0.2", 00:30:38.733 "adrfam": "ipv4", 00:30:38.733 "trsvcid": "4420", 00:30:38.733 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:38.733 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:38.733 "hdgst": false, 00:30:38.733 "ddgst": false 00:30:38.733 }, 00:30:38.733 "method": "bdev_nvme_attach_controller" 00:30:38.733 }' 00:30:38.733 [2024-07-21 08:26:48.221102] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:30:38.733 [2024-07-21 08:26:48.221177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid10432 ] 00:30:38.733 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.733 [2024-07-21 08:26:48.284937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.992 [2024-07-21 08:26:48.373460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.368 Running I/O for 1 seconds... 
00:30:41.745 00:30:41.745 Latency(us) 00:30:41.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.745 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:41.745 Verification LBA range: start 0x0 length 0x400 00:30:41.745 Nvme1n1 : 1.15 223.55 13.97 0.00 0.00 282375.59 22427.88 262532.36 00:30:41.745 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:41.745 Verification LBA range: start 0x0 length 0x400 00:30:41.745 Nvme2n1 : 1.10 232.92 14.56 0.00 0.00 267464.63 30486.38 239230.67 00:30:41.745 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:41.745 Verification LBA range: start 0x0 length 0x400 00:30:41.745 Nvme3n1 : 1.05 242.68 15.17 0.00 0.00 251633.21 18932.62 243891.01 00:30:41.745 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:41.745 Verification LBA range: start 0x0 length 0x400 00:30:41.745 Nvme4n1 : 1.09 239.54 14.97 0.00 0.00 245308.49 17670.45 257872.02 00:30:41.745 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:41.745 Verification LBA range: start 0x0 length 0x400 00:30:41.745 Nvme5n1 : 1.18 217.04 13.56 0.00 0.00 273694.15 23107.51 278066.82 00:30:41.745 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:41.745 Verification LBA range: start 0x0 length 0x400 00:30:41.745 Nvme6n1 : 1.15 223.35 13.96 0.00 0.00 260588.47 21651.15 262532.36 00:30:41.745 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:41.745 Verification LBA range: start 0x0 length 0x400 00:30:41.745 Nvme7n1 : 1.19 269.04 16.82 0.00 0.00 213509.46 16990.81 259425.47 00:30:41.745 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:41.745 Verification LBA range: start 0x0 length 0x400 00:30:41.745 Nvme8n1 : 1.13 225.63 14.10 0.00 0.00 248732.82 18155.90 257872.02 00:30:41.745 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:30:41.745 Verification LBA range: start 0x0 length 0x400 00:30:41.745 Nvme9n1 : 1.19 215.69 13.48 0.00 0.00 257122.99 24078.41 296708.17 00:30:41.745 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:41.745 Verification LBA range: start 0x0 length 0x400 00:30:41.745 Nvme10n1 : 1.20 267.13 16.70 0.00 0.00 204388.12 7330.32 256318.58 00:30:41.745 =================================================================================================================== 00:30:41.745 Total : 2356.56 147.29 0.00 0.00 248498.10 7330.32 296708.17 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:41.745 rmmod nvme_tcp 00:30:41.745 rmmod nvme_fabrics 00:30:41.745 rmmod 
nvme_keyring 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 9854 ']' 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 9854 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 9854 ']' 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 9854 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 9854 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 9854' 00:30:41.745 killing process with pid 9854 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 9854 00:30:41.745 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 9854 00:30:42.311 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:42.311 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:30:42.311 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:42.311 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:42.311 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:42.311 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.311 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:42.311 08:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:44.216 00:30:44.216 real 0m11.602s 00:30:44.216 user 0m33.559s 00:30:44.216 sys 0m3.245s 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:44.216 ************************************ 00:30:44.216 END TEST nvmf_shutdown_tc1 00:30:44.216 ************************************ 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:44.216 ************************************ 00:30:44.216 START TEST nvmf_shutdown_tc2 00:30:44.216 ************************************ 00:30:44.216 08:26:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.216 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.475 08:26:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:44.475 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:44.475 08:26:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:44.475 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:30:44.475 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:44.476 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:44.476 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:44.476 08:26:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.476 08:26:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:44.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:30:44.476 00:30:44.476 --- 10.0.0.2 ping statistics --- 00:30:44.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.476 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:44.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:30:44.476 00:30:44.476 --- 10.0.0.1 ping statistics --- 00:30:44.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.476 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:44.476 08:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=11193 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 11193 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 11193 ']' 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:44.476 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.476 [2024-07-21 08:26:54.065808] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:30:44.476 [2024-07-21 08:26:54.065882] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.476 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.734 [2024-07-21 08:26:54.134098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:44.734 [2024-07-21 08:26:54.231693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.734 [2024-07-21 08:26:54.231763] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:44.734 [2024-07-21 08:26:54.231780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.734 [2024-07-21 08:26:54.231794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.734 [2024-07-21 08:26:54.231807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:44.734 [2024-07-21 08:26:54.231871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.734 [2024-07-21 08:26:54.231946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.734 [2024-07-21 08:26:54.232004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:44.734 [2024-07-21 08:26:54.232007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.734 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:44.734 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:44.734 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:44.734 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:44.734 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.994 [2024-07-21 08:26:54.378505] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.994 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.994 Malloc1 00:30:44.994 [2024-07-21 08:26:54.458118] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.994 Malloc2 00:30:44.994 Malloc3 00:30:44.994 Malloc4 00:30:45.253 Malloc5 00:30:45.253 Malloc6 00:30:45.253 Malloc7 00:30:45.253 Malloc8 00:30:45.253 Malloc9 00:30:45.253 Malloc10 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=11372 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 11372 /var/tmp/bdevperf.sock 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 11372 ']' 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:45.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
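The launch line above starts bdevperf with `--json /dev/fd/63`: `gen_nvmf_target_json` builds one `bdev_nvme_attach_controller` stanza per subsystem (the repeated `config+=("$(cat <<-EOF …)")` blocks that follow in the trace), joins them with `IFS=,`, and the result reaches bdevperf through process substitution as a pseudo-file rather than an on-disk config. A minimal sketch of that pattern, with hypothetical, simplified names (`gen_target_json`, `consume`) and the JSON fields cut down from the real `trtype`/`traddr`/`subnqn` set:

```shell
# Build one JSON fragment per argument and join them with commas,
# mimicking the config+=(...) / IFS=, pattern visible in the trace.
gen_target_json() {
  local entries=() i
  for i in "$@"; do
    entries+=("{\"name\":\"Nvme$i\",\"trsvcid\":\"4420\"}")
  done
  local IFS=,
  printf '{"config":[%s]}\n' "${entries[*]}"
}

# Stand-in consumer: the real harness passes the fd to
#   bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 ...
consume() { cat "$1"; }

# Process substitution hands the generated JSON over as /dev/fd/N.
consume <(gen_target_json 1 2)
```

The advantage over a temp file is that nothing is written to disk and the config cannot outlive the pipeline that produced it.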
00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.511 { 00:30:45.511 "params": { 00:30:45.511 "name": "Nvme$subsystem", 00:30:45.511 "trtype": "$TEST_TRANSPORT", 00:30:45.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.511 "adrfam": "ipv4", 00:30:45.511 "trsvcid": "$NVMF_PORT", 00:30:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.511 "hdgst": ${hdgst:-false}, 00:30:45.511 "ddgst": ${ddgst:-false} 00:30:45.511 }, 00:30:45.511 "method": "bdev_nvme_attach_controller" 00:30:45.511 } 00:30:45.511 EOF 00:30:45.511 )") 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.511 { 00:30:45.511 "params": { 00:30:45.511 "name": "Nvme$subsystem", 00:30:45.511 "trtype": "$TEST_TRANSPORT", 00:30:45.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.511 "adrfam": "ipv4", 00:30:45.511 "trsvcid": "$NVMF_PORT", 00:30:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.511 "hdgst": ${hdgst:-false}, 00:30:45.511 "ddgst": ${ddgst:-false} 00:30:45.511 
}, 00:30:45.511 "method": "bdev_nvme_attach_controller" 00:30:45.511 } 00:30:45.511 EOF 00:30:45.511 )") 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.511 { 00:30:45.511 "params": { 00:30:45.511 "name": "Nvme$subsystem", 00:30:45.511 "trtype": "$TEST_TRANSPORT", 00:30:45.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.511 "adrfam": "ipv4", 00:30:45.511 "trsvcid": "$NVMF_PORT", 00:30:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.511 "hdgst": ${hdgst:-false}, 00:30:45.511 "ddgst": ${ddgst:-false} 00:30:45.511 }, 00:30:45.511 "method": "bdev_nvme_attach_controller" 00:30:45.511 } 00:30:45.511 EOF 00:30:45.511 )") 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.511 { 00:30:45.511 "params": { 00:30:45.511 "name": "Nvme$subsystem", 00:30:45.511 "trtype": "$TEST_TRANSPORT", 00:30:45.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.511 "adrfam": "ipv4", 00:30:45.511 "trsvcid": "$NVMF_PORT", 00:30:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.511 "hdgst": ${hdgst:-false}, 00:30:45.511 "ddgst": ${ddgst:-false} 00:30:45.511 }, 00:30:45.511 "method": "bdev_nvme_attach_controller" 00:30:45.511 } 00:30:45.511 EOF 00:30:45.511 )") 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:45.511 08:26:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.511 { 00:30:45.511 "params": { 00:30:45.511 "name": "Nvme$subsystem", 00:30:45.511 "trtype": "$TEST_TRANSPORT", 00:30:45.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.511 "adrfam": "ipv4", 00:30:45.511 "trsvcid": "$NVMF_PORT", 00:30:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.511 "hdgst": ${hdgst:-false}, 00:30:45.511 "ddgst": ${ddgst:-false} 00:30:45.511 }, 00:30:45.511 "method": "bdev_nvme_attach_controller" 00:30:45.511 } 00:30:45.511 EOF 00:30:45.511 )") 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.511 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.511 { 00:30:45.511 "params": { 00:30:45.511 "name": "Nvme$subsystem", 00:30:45.511 "trtype": "$TEST_TRANSPORT", 00:30:45.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.511 "adrfam": "ipv4", 00:30:45.511 "trsvcid": "$NVMF_PORT", 00:30:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.512 "hdgst": ${hdgst:-false}, 00:30:45.512 "ddgst": ${ddgst:-false} 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 } 00:30:45.512 EOF 00:30:45.512 )") 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.512 { 00:30:45.512 
"params": { 00:30:45.512 "name": "Nvme$subsystem", 00:30:45.512 "trtype": "$TEST_TRANSPORT", 00:30:45.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "$NVMF_PORT", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.512 "hdgst": ${hdgst:-false}, 00:30:45.512 "ddgst": ${ddgst:-false} 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 } 00:30:45.512 EOF 00:30:45.512 )") 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.512 { 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme$subsystem", 00:30:45.512 "trtype": "$TEST_TRANSPORT", 00:30:45.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "$NVMF_PORT", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.512 "hdgst": ${hdgst:-false}, 00:30:45.512 "ddgst": ${ddgst:-false} 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 } 00:30:45.512 EOF 00:30:45.512 )") 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.512 { 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme$subsystem", 00:30:45.512 "trtype": "$TEST_TRANSPORT", 00:30:45.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "$NVMF_PORT", 00:30:45.512 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.512 "hdgst": ${hdgst:-false}, 00:30:45.512 "ddgst": ${ddgst:-false} 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 } 00:30:45.512 EOF 00:30:45.512 )") 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.512 { 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme$subsystem", 00:30:45.512 "trtype": "$TEST_TRANSPORT", 00:30:45.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "$NVMF_PORT", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.512 "hdgst": ${hdgst:-false}, 00:30:45.512 "ddgst": ${ddgst:-false} 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 } 00:30:45.512 EOF 00:30:45.512 )") 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:30:45.512 08:26:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme1", 00:30:45.512 "trtype": "tcp", 00:30:45.512 "traddr": "10.0.0.2", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "4420", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:45.512 "hdgst": false, 00:30:45.512 "ddgst": false 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 },{ 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme2", 00:30:45.512 "trtype": "tcp", 00:30:45.512 "traddr": "10.0.0.2", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "4420", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:45.512 "hdgst": false, 00:30:45.512 "ddgst": false 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 },{ 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme3", 00:30:45.512 "trtype": "tcp", 00:30:45.512 "traddr": "10.0.0.2", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "4420", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:45.512 "hdgst": false, 00:30:45.512 "ddgst": false 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 },{ 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme4", 00:30:45.512 "trtype": "tcp", 00:30:45.512 "traddr": "10.0.0.2", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "4420", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:45.512 "hdgst": false, 00:30:45.512 "ddgst": false 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 },{ 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme5", 00:30:45.512 
"trtype": "tcp", 00:30:45.512 "traddr": "10.0.0.2", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "4420", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:45.512 "hdgst": false, 00:30:45.512 "ddgst": false 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 },{ 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme6", 00:30:45.512 "trtype": "tcp", 00:30:45.512 "traddr": "10.0.0.2", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "4420", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:45.512 "hdgst": false, 00:30:45.512 "ddgst": false 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 },{ 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme7", 00:30:45.512 "trtype": "tcp", 00:30:45.512 "traddr": "10.0.0.2", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "4420", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:45.512 "hdgst": false, 00:30:45.512 "ddgst": false 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 },{ 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme8", 00:30:45.512 "trtype": "tcp", 00:30:45.512 "traddr": "10.0.0.2", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "4420", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:45.512 "hdgst": false, 00:30:45.512 "ddgst": false 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 },{ 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme9", 00:30:45.512 "trtype": "tcp", 00:30:45.512 "traddr": "10.0.0.2", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "4420", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:45.512 "hdgst": false, 00:30:45.512 "ddgst": 
false 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 },{ 00:30:45.512 "params": { 00:30:45.512 "name": "Nvme10", 00:30:45.512 "trtype": "tcp", 00:30:45.512 "traddr": "10.0.0.2", 00:30:45.512 "adrfam": "ipv4", 00:30:45.512 "trsvcid": "4420", 00:30:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:45.512 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:45.512 "hdgst": false, 00:30:45.512 "ddgst": false 00:30:45.512 }, 00:30:45.512 "method": "bdev_nvme_attach_controller" 00:30:45.512 }' 00:30:45.512 [2024-07-21 08:26:54.974476] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:30:45.512 [2024-07-21 08:26:54.974564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11372 ] 00:30:45.512 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.512 [2024-07-21 08:26:55.037163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.512 [2024-07-21 08:26:55.123581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.412 Running I/O for 10 seconds... 
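The trace that follows (`read_io_count=3`, then `67`, then `131` checked against `-ge 100`, with `sleep 0.25` and a retry decrement in between) is the harness's `waitforio` helper polling `bdev_get_iostat` until enough reads have completed. A condensed, hypothetical rendering of that loop, with the RPC call abstracted behind a caller-supplied command so the sketch is self-contained:

```shell
# Poll a read-ops counter up to 10 times, 0.25 s apart, until it
# reaches the threshold. In the real harness $1 would be
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
waitforio() {
  local get_count=$1 threshold=${2:-100} i=10 count
  while (( i != 0 )); do
    count=$("$get_count")
    if [ "$count" -ge "$threshold" ]; then
      return 0            # enough I/O observed; the workload is live
    fi
    sleep 0.25
    (( i-- ))
  done
  return 1                # timed out without reaching the threshold
}
```

Bounding the loop at 10 iterations keeps a dead bdevperf from hanging the test; the caller treats a non-zero return as "no I/O ever flowed" and fails the case.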
00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:30:47.412 08:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.412 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:30:47.412 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:30:47.412 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:47.671 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:47.671 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:47.671 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:47.671 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:47.671 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.671 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:47.671 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.671 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:30:47.671 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:47.671 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:47.930 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:47.930 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:47.930 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:47.930 08:26:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:47.930 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.930 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 11372 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 11372 ']' 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 11372 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 11372 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 11372' 00:30:48.188 killing process with pid 11372 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 11372 00:30:48.188 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 11372 00:30:48.188 Received shutdown signal, test time was about 1.033688 seconds 00:30:48.188 00:30:48.188 Latency(us) 00:30:48.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.188 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:48.188 Verification LBA range: start 0x0 length 0x400 00:30:48.188 Nvme1n1 : 1.03 247.85 15.49 0.00 0.00 255497.29 18350.08 262532.36 00:30:48.188 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:48.188 Verification LBA range: start 0x0 length 0x400 00:30:48.188 Nvme2n1 : 0.98 194.93 12.18 0.00 0.00 318834.92 20583.16 262532.36 00:30:48.188 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:48.188 Verification LBA range: start 0x0 length 0x400 00:30:48.188 Nvme3n1 : 1.01 256.85 16.05 0.00 0.00 237632.42 21359.88 254765.13 00:30:48.188 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:48.188 Verification LBA range: start 0x0 length 0x400 00:30:48.188 Nvme4n1 : 1.02 251.42 15.71 0.00 0.00 238436.50 20874.43 259425.47 00:30:48.188 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:48.188 Verification LBA range: start 0x0 length 0x400 00:30:48.188 Nvme5n1 : 1.00 192.94 12.06 0.00 0.00 304127.75 20000.62 264085.81 00:30:48.188 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:48.188 Verification LBA range: start 0x0 length 0x400 00:30:48.188 Nvme6n1 : 1.02 250.56 15.66 0.00 0.00 230347.85 18835.53 239230.67 00:30:48.188 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:48.188 Verification LBA range: start 0x0 length 0x400 00:30:48.188 Nvme7n1 : 1.03 249.75 15.61 0.00 0.00 226704.69 21165.70 273406.48 00:30:48.188 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:48.188 Verification LBA range: start 0x0 length 0x400 00:30:48.188 Nvme8n1 : 1.03 248.67 15.54 0.00 0.00 223272.39 19806.44 260978.92 00:30:48.188 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:48.188 Verification LBA range: start 0x0 length 0x400 00:30:48.188 Nvme9n1 : 1.01 190.91 11.93 0.00 0.00 284051.03 18835.53 273406.48 00:30:48.188 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:48.188 Verification LBA range: start 0x0 length 0x400 00:30:48.188 Nvme10n1 : 1.00 191.96 12.00 0.00 0.00 276471.91 18544.26 290494.39 00:30:48.188 =================================================================================================================== 00:30:48.188 Total : 2275.84 142.24 0.00 0.00 255469.63 18350.08 290494.39 00:30:48.446 08:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:30:49.388 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 11193 00:30:49.388 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:49.389 rmmod nvme_tcp 00:30:49.389 rmmod nvme_fabrics 00:30:49.389 rmmod nvme_keyring 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 11193 ']' 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 11193 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 11193 ']' 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 11193 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:49.389 08:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 11193 00:30:49.389 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:49.389 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # 
'[' reactor_1 = sudo ']' 00:30:49.389 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 11193' 00:30:49.389 killing process with pid 11193 00:30:49.389 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 11193 00:30:49.389 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 11193 00:30:49.955 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:49.955 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:49.955 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:49.955 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:49.955 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:49.955 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.955 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:49.955 08:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.912 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:51.912 00:30:51.912 real 0m7.677s 00:30:51.912 user 0m23.379s 00:30:51.912 sys 0m1.495s 00:30:51.912 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:51.912 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.912 ************************************ 00:30:51.912 END TEST nvmf_shutdown_tc2 00:30:51.912 ************************************ 00:30:51.912 08:27:01 nvmf_tcp.nvmf_shutdown -- 
common/autotest_common.sh@1142 -- # return 0 00:30:51.912 08:27:01 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:51.912 08:27:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:51.912 08:27:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.912 08:27:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:52.170 ************************************ 00:30:52.170 START TEST nvmf_shutdown_tc3 00:30:52.170 ************************************ 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:52.170 08:27:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:52.170 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:52.170 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:52.170 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.170 08:27:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:52.170 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:52.170 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:52.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:52.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:30:52.171 00:30:52.171 --- 10.0.0.2 ping statistics --- 00:30:52.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.171 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:52.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:52.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:30:52.171 00:30:52.171 --- 10.0.0.1 ping statistics --- 00:30:52.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.171 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=12377 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 12377 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 12377 ']' 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:52.171 08:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:52.171 [2024-07-21 08:27:01.783428] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:30:52.171 [2024-07-21 08:27:01.783501] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.430 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.430 [2024-07-21 08:27:01.848480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:52.430 [2024-07-21 08:27:01.938700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.430 [2024-07-21 08:27:01.938752] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:52.430 [2024-07-21 08:27:01.938775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.430 [2024-07-21 08:27:01.938793] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.430 [2024-07-21 08:27:01.938803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.430 [2024-07-21 08:27:01.938883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:52.430 [2024-07-21 08:27:01.938949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:52.430 [2024-07-21 08:27:01.939000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:52.430 [2024-07-21 08:27:01.939002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:52.687 [2024-07-21 08:27:02.098531] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:52.687 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.688 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:52.688 Malloc1 00:30:52.688 [2024-07-21 08:27:02.188810] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.688 Malloc2 00:30:52.688 Malloc3 00:30:52.688 Malloc4 00:30:52.946 Malloc5 00:30:52.946 Malloc6 00:30:52.946 Malloc7 00:30:52.946 Malloc8 00:30:52.946 Malloc9 00:30:53.204 Malloc10 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=12569 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 12569 /var/tmp/bdevperf.sock 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 12569 ']' 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:53.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.204 { 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme$subsystem", 00:30:53.204 "trtype": "$TEST_TRANSPORT", 00:30:53.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "$NVMF_PORT", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.204 "hdgst": ${hdgst:-false}, 00:30:53.204 "ddgst": ${ddgst:-false} 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 } 00:30:53.204 EOF 00:30:53.204 )") 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.204 { 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme$subsystem", 00:30:53.204 "trtype": "$TEST_TRANSPORT", 00:30:53.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "$NVMF_PORT", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.204 "hdgst": ${hdgst:-false}, 00:30:53.204 "ddgst": ${ddgst:-false} 00:30:53.204 
}, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 } 00:30:53.204 EOF 00:30:53.204 )") 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.204 { 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme$subsystem", 00:30:53.204 "trtype": "$TEST_TRANSPORT", 00:30:53.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "$NVMF_PORT", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.204 "hdgst": ${hdgst:-false}, 00:30:53.204 "ddgst": ${ddgst:-false} 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 } 00:30:53.204 EOF 00:30:53.204 )") 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.204 { 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme$subsystem", 00:30:53.204 "trtype": "$TEST_TRANSPORT", 00:30:53.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "$NVMF_PORT", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.204 "hdgst": ${hdgst:-false}, 00:30:53.204 "ddgst": ${ddgst:-false} 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 } 00:30:53.204 EOF 00:30:53.204 )") 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:53.204 08:27:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.204 { 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme$subsystem", 00:30:53.204 "trtype": "$TEST_TRANSPORT", 00:30:53.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "$NVMF_PORT", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.204 "hdgst": ${hdgst:-false}, 00:30:53.204 "ddgst": ${ddgst:-false} 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 } 00:30:53.204 EOF 00:30:53.204 )") 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.204 { 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme$subsystem", 00:30:53.204 "trtype": "$TEST_TRANSPORT", 00:30:53.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "$NVMF_PORT", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.204 "hdgst": ${hdgst:-false}, 00:30:53.204 "ddgst": ${ddgst:-false} 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 } 00:30:53.204 EOF 00:30:53.204 )") 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.204 { 00:30:53.204 
"params": { 00:30:53.204 "name": "Nvme$subsystem", 00:30:53.204 "trtype": "$TEST_TRANSPORT", 00:30:53.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "$NVMF_PORT", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.204 "hdgst": ${hdgst:-false}, 00:30:53.204 "ddgst": ${ddgst:-false} 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 } 00:30:53.204 EOF 00:30:53.204 )") 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.204 { 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme$subsystem", 00:30:53.204 "trtype": "$TEST_TRANSPORT", 00:30:53.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "$NVMF_PORT", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.204 "hdgst": ${hdgst:-false}, 00:30:53.204 "ddgst": ${ddgst:-false} 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 } 00:30:53.204 EOF 00:30:53.204 )") 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.204 { 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme$subsystem", 00:30:53.204 "trtype": "$TEST_TRANSPORT", 00:30:53.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "$NVMF_PORT", 00:30:53.204 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.204 "hdgst": ${hdgst:-false}, 00:30:53.204 "ddgst": ${ddgst:-false} 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 } 00:30:53.204 EOF 00:30:53.204 )") 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.204 { 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme$subsystem", 00:30:53.204 "trtype": "$TEST_TRANSPORT", 00:30:53.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "$NVMF_PORT", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.204 "hdgst": ${hdgst:-false}, 00:30:53.204 "ddgst": ${ddgst:-false} 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 } 00:30:53.204 EOF 00:30:53.204 )") 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:30:53.204 08:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme1", 00:30:53.204 "trtype": "tcp", 00:30:53.204 "traddr": "10.0.0.2", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "4420", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:53.204 "hdgst": false, 00:30:53.204 "ddgst": false 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 },{ 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme2", 00:30:53.204 "trtype": "tcp", 00:30:53.204 "traddr": "10.0.0.2", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "4420", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:53.204 "hdgst": false, 00:30:53.204 "ddgst": false 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 },{ 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme3", 00:30:53.204 "trtype": "tcp", 00:30:53.204 "traddr": "10.0.0.2", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "4420", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:53.204 "hdgst": false, 00:30:53.204 "ddgst": false 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 },{ 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme4", 00:30:53.204 "trtype": "tcp", 00:30:53.204 "traddr": "10.0.0.2", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "4420", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:53.204 "hdgst": false, 00:30:53.204 "ddgst": false 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 },{ 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme5", 00:30:53.204 
"trtype": "tcp", 00:30:53.204 "traddr": "10.0.0.2", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "4420", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:53.204 "hdgst": false, 00:30:53.204 "ddgst": false 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 },{ 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme6", 00:30:53.204 "trtype": "tcp", 00:30:53.204 "traddr": "10.0.0.2", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "4420", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:53.204 "hdgst": false, 00:30:53.204 "ddgst": false 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 },{ 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme7", 00:30:53.204 "trtype": "tcp", 00:30:53.204 "traddr": "10.0.0.2", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "4420", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:53.204 "hdgst": false, 00:30:53.204 "ddgst": false 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 },{ 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme8", 00:30:53.204 "trtype": "tcp", 00:30:53.204 "traddr": "10.0.0.2", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "4420", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:53.204 "hdgst": false, 00:30:53.204 "ddgst": false 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 },{ 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme9", 00:30:53.204 "trtype": "tcp", 00:30:53.204 "traddr": "10.0.0.2", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "4420", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:53.204 "hdgst": false, 00:30:53.204 "ddgst": 
false 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 },{ 00:30:53.204 "params": { 00:30:53.204 "name": "Nvme10", 00:30:53.204 "trtype": "tcp", 00:30:53.204 "traddr": "10.0.0.2", 00:30:53.204 "adrfam": "ipv4", 00:30:53.204 "trsvcid": "4420", 00:30:53.204 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:53.204 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:53.204 "hdgst": false, 00:30:53.204 "ddgst": false 00:30:53.204 }, 00:30:53.204 "method": "bdev_nvme_attach_controller" 00:30:53.204 }' 00:30:53.204 [2024-07-21 08:27:02.693470] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:30:53.204 [2024-07-21 08:27:02.693576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12569 ] 00:30:53.204 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.204 [2024-07-21 08:27:02.756992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.462 [2024-07-21 08:27:02.845053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.361 Running I/O for 10 seconds... 
00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:55.361 
08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=18 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 18 -ge 100 ']' 00:30:55.361 08:27:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 
00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 12377 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 12377 ']' 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 12377 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 12377 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 12377' 00:30:55.642 killing process with pid 12377 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 12377 00:30:55.642 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 12377 00:30:55.642 [2024-07-21 08:27:05.097669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09090 is same with the state(5) to be set 00:30:55.642 [2024-07-21 08:27:05.097748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09090 is same with the state(5) to be set 00:30:55.642 [2024-07-21 08:27:05.097764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09090 is same with the state(5) to be set 00:30:55.642 [2024-07-21 08:27:05.097777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09090 is same with the 
state(5) to be set 00:30:55.642 [2024-07-21 08:27:05.097790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09090 is same with the state(5) to be set [previous message repeated for tqpair=0xf09090 through 08:27:05.098541] 00:30:55.643 [2024-07-21 08:27:05.101368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.643 [2024-07-21 08:27:05.101403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set
00:30:55.643 [2024-07-21 08:27:05.101418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set [previous message repeated for tqpair=0xf0bb20 through 08:27:05.101899] 00:30:55.644 [2024-07-21 08:27:05.101924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20
is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.101946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.101970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.101995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 
00:30:55.644 [2024-07-21 08:27:05.102156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.102303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0bb20 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.105745] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09540 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.105774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09540 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.105789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09540 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 
is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 
00:30:55.644 [2024-07-21 08:27:05.107665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.644 [2024-07-21 08:27:05.107799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107812] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.107996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.108008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.108021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.108033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf099f0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 
is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 
00:30:55.645 [2024-07-21 08:27:05.109449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109498] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109609] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109648] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109940] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 
is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.109988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.110000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.110012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.110023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.110035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09ec0 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.110860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.110887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.110912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.645 [2024-07-21 08:27:05.110927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.110948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 
00:30:55.646 [2024-07-21 08:27:05.110962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.110977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.110990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111015] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111118] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 
is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 
00:30:55.646 [2024-07-21 08:27:05.111587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.111730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a370 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.112926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.112966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.112979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.112993] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113046] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113072] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.646 [2024-07-21 08:27:05.113234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 
is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 
00:30:55.647 [2024-07-21 08:27:05.113462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113643] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.113775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a840 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0acf0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 
is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.115993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.116007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.116021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.116034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.116048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.116060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.116072] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.116086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.116099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.116112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.647 [2024-07-21 08:27:05.116124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 
00:30:55.648 [2024-07-21 08:27:05.116138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116290] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with t[2024-07-21 08:27:05.116317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nshe state(5) to be set 00:30:55.648 id:0 cdw10:00000000 cdw11:00000000 00:30:55.648 [2024-07-21 08:27:05.116349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with t[2024-07-21 08:27:05.116361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(5) to be set 00:30:55.648 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 [2024-07-21 08:27:05.116378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.648 [2024-07-21 08:27:05.116391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 [2024-07-21 08:27:05.116405] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.648 [2024-07-21 08:27:05.116419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 [2024-07-21 08:27:05.116432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.648 [2024-07-21 08:27:05.116445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 [2024-07-21 08:27:05.116458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11828c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 
08:27:05.116496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.648 [2024-07-21 08:27:05.116533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 [2024-07-21 08:27:05.116547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.648 [2024-07-21 08:27:05.116563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 [2024-07-21 08:27:05.116576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.648 
[2024-07-21 08:27:05.116589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 [2024-07-21 08:27:05.116602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-07-21 08:27:05.116621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with tid:0 cdw10:00000000 cdw11:00000000 00:30:55.648 he state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-21 08:27:05.116637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 he state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1180b10 is same [2024-07-21 08:27:05.116652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with twith the state(5) to be set 00:30:55.648 he state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:55.648 [2024-07-21 08:27:05.116692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with t[2024-07-21 08:27:05.116718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(5) to be set 00:30:55.648 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 [2024-07-21 08:27:05.116735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with t[2024-07-21 08:27:05.116736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nshe state(5) to be set 00:30:55.648 id:0 cdw10:00000000 cdw11:00000000 00:30:55.648 [2024-07-21 08:27:05.116752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-21 08:27:05.116752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 he state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with t[2024-07-21 08:27:05.116768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nshe state(5) to be set 00:30:55.648 id:0 cdw10:00000000 cdw11:00000000 00:30:55.648 [2024-07-21 08:27:05.116783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with t[2024-07-21 08:27:05.116783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(5) to be set 00:30:55.648 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 [2024-07-21 08:27:05.116801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with t[2024-07-21 08:27:05.116803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nshe state(5) to be set 00:30:55.648 id:0 
cdw10:00000000 cdw11:00000000 00:30:55.648 [2024-07-21 08:27:05.116817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with t[2024-07-21 08:27:05.116817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(5) to be set 00:30:55.648 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.648 [2024-07-21 08:27:05.116832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with t[2024-07-21 08:27:05.116833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301b90 is same he state(5) to be set 00:30:55.648 with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b1c0 is same with the state(5) to be set 00:30:55.648 [2024-07-21 08:27:05.116880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.116901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.116916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.116929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.116944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.116956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.116979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.116994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1316bf0 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd517a0 is same with the state(5) to be set 00:30:55.649 
[2024-07-21 08:27:05.117233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78610 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119c590 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118a340 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 
[2024-07-21 08:27:05.117730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.649 [2024-07-21 08:27:05.117853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.649 [2024-07-21 08:27:05.117866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd49ee0 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.649 [2024-07-21 08:27:05.117905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.117918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.117931] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 
00:30:55.650 [2024-07-21 08:27:05.117943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.117956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.117968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.117980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.117993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118106] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.650 [2024-07-21 08:27:05.118246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.650 [2024-07-21 08:27:05.118274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.650 [2024-07-21 08:27:05.118298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.650 [2024-07-21 08:27:05.118311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.650 [2024-07-21 08:27:05.118337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.650 [2024-07-21 08:27:05.118350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.650 [2024-07-21 08:27:05.118363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.650 [2024-07-21 08:27:05.118376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.650 [2024-07-21 08:27:05.118388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.650 [2024-07-21 08:27:05.118404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.650 [2024-07-21 08:27:05.118437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.650 [2024-07-21 08:27:05.118468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.650 [2024-07-21 08:27:05.118481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b670 is same with the state(5) to be set 00:30:55.650 [2024-07-21 08:27:05.118486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.650 [2024-07-21 08:27:05.118501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.650 [2024-07-21 08:27:05.118515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.650 [2024-07-21 08:27:05.118552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.650 [2024-07-21 08:27:05.118567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.650 [2024-07-21 08:27:05.118583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.650 [2024-07-21 08:27:05.118599] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 
[2024-07-21 08:27:05.118964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.118977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.118993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119677] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.651 [2024-07-21 08:27:05.119812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.651 [2024-07-21 08:27:05.119826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.119841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.119855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.119871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.119885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.119900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.119914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.119944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.119958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.119973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.119990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.120005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.120019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.120034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.120047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.120067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.120081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.120097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.120111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.120126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.120140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.120155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.120168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.120183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 
08:27:05.120196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.120212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.120225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.120240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.120253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.120267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.120281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.120324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.652 [2024-07-21 08:27:05.120400] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd4da20 was disconnected and freed. reset controller. 
00:30:55.652 [2024-07-21 08:27:05.122652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.122678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.122705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.122721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.122738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.122752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.122767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.122781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.122796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.122810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.122826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.122839] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.122855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.122869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.122885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.122900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.122916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.122930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.122945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.122960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.122976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.652 [2024-07-21 08:27:05.122989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.652 [2024-07-21 08:27:05.123004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 
[2024-07-21 08:27:05.123349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.123980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.123994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.124009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.124024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 
[2024-07-21 08:27:05.124040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.124054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.124069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.124083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.124099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.124113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.653 [2024-07-21 08:27:05.124129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.653 [2024-07-21 08:27:05.124143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.654 [2024-07-21 08:27:05.124592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.124692] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a8ca50 was disconnected and freed. reset controller. 00:30:55.654 [2024-07-21 08:27:05.124919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:55.654 [2024-07-21 08:27:05.124963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11828c0 (9): Bad file descriptor 00:30:55.654 [2024-07-21 08:27:05.125034] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:55.654 [2024-07-21 08:27:05.126566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:55.654 [2024-07-21 08:27:05.126601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc78610 (9): Bad file descriptor 00:30:55.654 [2024-07-21 08:27:05.126682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1180b10 (9): Bad file descriptor 00:30:55.654 [2024-07-21 08:27:05.126717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301b90 (9): Bad file descriptor 00:30:55.654 [2024-07-21 08:27:05.126750] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1316bf0 (9): Bad file descriptor 00:30:55.654 [2024-07-21 08:27:05.126782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd517a0 (9): Bad file descriptor 00:30:55.654 [2024-07-21 08:27:05.126841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.654 [2024-07-21 08:27:05.126872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.126892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.654 [2024-07-21 08:27:05.126906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.126922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.654 [2024-07-21 08:27:05.126937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.126952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.654 [2024-07-21 08:27:05.126966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.654 [2024-07-21 08:27:05.126979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1316a10 is same with the state(5) to be set 00:30:55.654 [2024-07-21 08:27:05.127010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119c590 (9): Bad file descriptor 
00:30:55.654 [2024-07-21 08:27:05.127043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118a340 (9): Bad file descriptor
00:30:55.654 [2024-07-21 08:27:05.127073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd49ee0 (9): Bad file descriptor
00:30:55.654 [2024-07-21 08:27:05.127855] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:55.654 [2024-07-21 08:27:05.128009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.654 [2024-07-21 08:27:05.128039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11828c0 with addr=10.0.0.2, port=4420
00:30:55.654 [2024-07-21 08:27:05.128056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11828c0 is same with the state(5) to be set
00:30:55.654 [2024-07-21 08:27:05.128155] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:55.654 [2024-07-21 08:27:05.128232] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:55.654 [2024-07-21 08:27:05.128284] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:55.654 [2024-07-21 08:27:05.128352] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:55.654 [2024-07-21 08:27:05.128698] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:55.654 [2024-07-21 08:27:05.128766] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:55.654 [2024-07-21 08:27:05.128943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.654 [2024-07-21 08:27:05.128970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc78610 with addr=10.0.0.2, port=4420
00:30:55.654 [2024-07-21 08:27:05.128987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78610 is same with the state(5) to be set
00:30:55.654 [2024-07-21 08:27:05.129006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11828c0 (9): Bad file descriptor
00:30:55.654 [2024-07-21 08:27:05.129143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc78610 (9): Bad file descriptor
00:30:55.654 [2024-07-21 08:27:05.129169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:30:55.654 [2024-07-21 08:27:05.129183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:30:55.654 [2024-07-21 08:27:05.129199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:30:55.654 [2024-07-21 08:27:05.129278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:55.654 [2024-07-21 08:27:05.129302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:30:55.654 [2024-07-21 08:27:05.129315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:30:55.654 [2024-07-21 08:27:05.129329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:30:55.654 [2024-07-21 08:27:05.129388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
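The records above all follow SPDK's standard log line shape (`file.c:line:function: *LEVEL*: message` after a bracketed timestamp). For triaging long captures like this one, a small parser can pull out the source location and severity of each record. This is a hypothetical sketch, not part of SPDK: the regex, the `parse` helper, and the field names are assumptions made for illustration.

```python
import re

# An SPDK log record, as captured in this console log, looks like:
#   [2024-07-21 08:27:05.128009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
# Note: some records have a space before the line number ("nvme_qpair.c: 223:..."),
# which the \s* below tolerates.
RECORD = re.compile(
    r"\[(?P<ts>[^\]]+)\]\s+"                              # bracketed timestamp
    r"(?P<file>\S+?):\s*(?P<line>\d+):(?P<func>\w+):\s+"  # source file, line, function
    r"\*(?P<level>\w+)\*:\s+(?P<msg>.*)"                  # severity and message text
)

def parse(line):
    """Return a dict of fields for one SPDK log record, or None if no record is found."""
    m = RECORD.search(line)
    return m.groupdict() if m else None

# Example using a record taken verbatim from the log above:
sample = ("00:30:55.654 [2024-07-21 08:27:05.128009] posix.c:1038:posix_sock_create: "
          "*ERROR*: connect() failed, errno = 111")
rec = parse(sample)
```

With records parsed this way, the repeated "ABORTED - SQ DELETION" completions can be grouped by `file`/`func` or counted per severity to separate the socket failures from the resulting queue-pair teardown noise.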
00:30:55.654 [2024-07-21 08:27:05.136676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1316a10 (9): Bad file descriptor
00:30:55.654 [2024-07-21 08:27:05.136923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.136951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.136987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.137977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.137993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.655 [2024-07-21 08:27:05.138007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.655 [2024-07-21 08:27:05.138023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.138921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.138936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12300e0 is same with the state(5) to be set
00:30:55.656 [2024-07-21 08:27:05.140216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.140241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.140261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.140277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.140294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.140308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.140325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.140344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.140361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.140375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.140391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.140405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.140421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.140435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.140451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.140465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.140481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.140495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.140511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.140525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.656 [2024-07-21 08:27:05.140540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.656 [2024-07-21 08:27:05.140555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.140980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.140994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.141009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.141023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.141039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.141053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.141069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.141084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.141099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.141120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.141138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.141152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.141169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.141184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.141200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.657 [2024-07-21 08:27:05.141214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.657 [2024-07-21 08:27:05.141230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.658 [2024-07-21 08:27:05.141245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.658 [2024-07-21 08:27:05.141262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.658 [2024-07-21 08:27:05.141277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.658 [2024-07-21 08:27:05.141294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.658 [2024-07-21 08:27:05.141308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.658 [2024-07-21 08:27:05.141325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.658 [2024-07-21 08:27:05.141339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.658 [2024-07-21 08:27:05.141355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.658 [2024-07-21 08:27:05.141370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:55.658 [2024-07-21 08:27:05.141737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.141979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.141993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.142008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.142022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.142039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.142052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.142068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.142081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.142097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.142111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.142127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.142141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.142156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.142170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.142186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.142200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.142214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4ee00 is same with the state(5) to be set 00:30:55.658 [2024-07-21 08:27:05.143459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.143482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.143504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.143524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.143541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.143555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.143571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.143585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.143601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.658 [2024-07-21 08:27:05.143621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.658 [2024-07-21 08:27:05.143639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.143670] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.143700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.143729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.143759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.143788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.143818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.143848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.143878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.143913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.143944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.143974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.143989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144183] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144351] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 
08:27:05.144712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.659 [2024-07-21 08:27:05.144742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.659 [2024-07-21 08:27:05.144756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.660 [2024-07-21 08:27:05.144773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.660 [2024-07-21 08:27:05.144788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.660 [2024-07-21 08:27:05.144804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.660 [2024-07-21 08:27:05.144817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.660 [2024-07-21 08:27:05.144833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.660 [2024-07-21 08:27:05.144847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.660 [2024-07-21 08:27:05.144864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.660 [2024-07-21 08:27:05.144877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.660 [2024-07-21 08:27:05.144893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.660 [2024-07-21 08:27:05.144906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.660 [2024-07-21 08:27:05.144922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.660 [2024-07-21 08:27:05.144936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.660 [2024-07-21 08:27:05.144952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.660 [2024-07-21 08:27:05.144965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.660 [2024-07-21 08:27:05.144981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.660 [2024-07-21 08:27:05.144995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.660 [2024-07-21 08:27:05.145011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.660 [2024-07-21 08:27:05.145025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.660 [2024-07-21 08:27:05.145040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.145415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.145429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f6a30 is same with the state(5) to be set
00:30:55.660 [2024-07-21 08:27:05.146659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.146683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.146705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.146721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.146737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.146751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.146768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.146782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.146798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.146812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.146828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.146842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.146858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.146872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.146888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.146902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.146918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.146932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.146948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.146962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.146978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.146991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.147006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.147020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.147036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.147055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.147072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.147087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.147103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.147117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.660 [2024-07-21 08:27:05.147134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.660 [2024-07-21 08:27:05.147148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.147973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.147988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.661 [2024-07-21 08:27:05.148334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.661 [2024-07-21 08:27:05.148350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.148363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.148379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.148392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.148408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.148422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.148437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.148451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.148466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.148480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.148496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.148510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.148526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.148539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.148555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.148575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.148591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.148605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.148627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f7eb0 is same with the state(5) to be set
00:30:55.662 [2024-07-21 08:27:05.149882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.149907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.149928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.149944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.149960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.149974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.149990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.662 [2024-07-21 08:27:05.150704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.662 [2024-07-21 08:27:05.150720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.663 [2024-07-21 08:27:05.150734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.663 [2024-07-21 08:27:05.150749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.663 [2024-07-21 08:27:05.150762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.663 [2024-07-21 08:27:05.150778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.663 [2024-07-21 08:27:05.150792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.663 [2024-07-21 08:27:05.150814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.663 [2024-07-21 08:27:05.150830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.663 [2024-07-21 08:27:05.150846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.663 [2024-07-21 08:27:05.150859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.663 [2024-07-21 08:27:05.150875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.663 [2024-07-21 08:27:05.150888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.663 [2024-07-21 08:27:05.150904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.663 [2024-07-21 08:27:05.150918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.663 [2024-07-21 08:27:05.150933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.663 [2024-07-21 08:27:05.150947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.663 [2024-07-21 08:27:05.150963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.663 [2024-07-21 08:27:05.150977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.663 [2024-07-21 08:27:05.150993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 
08:27:05.151171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:55.663 [2024-07-21 08:27:05.151689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.151857] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.151872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5050 is same with the state(5) to be set 00:30:55.663 [2024-07-21 08:27:05.153116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.153141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.153161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.153177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.153192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.153206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.153222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.663 [2024-07-21 08:27:05.153236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.663 [2024-07-21 08:27:05.153251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:55.664 [2024-07-21 08:27:05.153447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.153984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.153998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154131] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154294] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.664 [2024-07-21 08:27:05.154426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.664 [2024-07-21 08:27:05.154440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 
08:27:05.154648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.154983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.154996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.155012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.155025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.155040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.155054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.155069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c34510 is same with the state(5) to be set 00:30:55.665 [2024-07-21 08:27:05.156330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:55.665 [2024-07-21 08:27:05.156420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.665 [2024-07-21 08:27:05.156916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.665 [2024-07-21 08:27:05.156932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.156950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.156967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.156981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.156996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157113] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157274] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 
08:27:05.157625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.666 [2024-07-21 08:27:05.157945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.666 [2024-07-21 08:27:05.157959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.667 [2024-07-21 08:27:05.157975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.667 [2024-07-21 08:27:05.157989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.667 [2024-07-21 08:27:05.158004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.667 [2024-07-21 08:27:05.158018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.667 [2024-07-21 08:27:05.158032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.667 [2024-07-21 08:27:05.158046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.667 [2024-07-21 08:27:05.158062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.667 [2024-07-21 08:27:05.158076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.667 [2024-07-21 08:27:05.158095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.667 [2024-07-21 08:27:05.158109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.667 [2024-07-21 08:27:05.158125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.667 [2024-07-21 08:27:05.158139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:55.667 [2024-07-21 08:27:05.158155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.667 [2024-07-21 08:27:05.158168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.667 [2024-07-21 08:27:05.158184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.667 [2024-07-21 08:27:05.158199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.667 [2024-07-21 08:27:05.158215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.667 [2024-07-21 08:27:05.158228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.667 [2024-07-21 08:27:05.158244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.667 [2024-07-21 08:27:05.158257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.667 [2024-07-21 08:27:05.158273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.667 [2024-07-21 08:27:05.158286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.667 [2024-07-21 08:27:05.158300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12295b0 is same with the state(5) to be set 00:30:55.667 [2024-07-21 08:27:05.160401] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.667 [2024-07-21 08:27:05.160435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:55.667 [2024-07-21 08:27:05.160456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:55.667 [2024-07-21 08:27:05.160561] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:55.667 [2024-07-21 08:27:05.160589] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:55.667 [2024-07-21 08:27:05.160609] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:55.667 [2024-07-21 08:27:05.160648] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:55.667 [2024-07-21 08:27:05.160674] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:55.667 [2024-07-21 08:27:05.160791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:55.667 [2024-07-21 08:27:05.160817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:55.667 [2024-07-21 08:27:05.160836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:55.667 [2024-07-21 08:27:05.160853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:55.667 [2024-07-21 08:27:05.160880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:55.667 [2024-07-21 08:27:05.161137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.667 [2024-07-21 08:27:05.161167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd49ee0 with addr=10.0.0.2, port=4420 00:30:55.667 [2024-07-21 08:27:05.161185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd49ee0 is same with the state(5) to be set 00:30:55.667 [2024-07-21 08:27:05.161295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.667 [2024-07-21 08:27:05.161322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118a340 with addr=10.0.0.2, port=4420 00:30:55.667 [2024-07-21 08:27:05.161338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118a340 is same with the state(5) to be set 00:30:55.667 [2024-07-21 08:27:05.161448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.667 [2024-07-21 08:27:05.161474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1180b10 with addr=10.0.0.2, port=4420 00:30:55.667 [2024-07-21 08:27:05.161490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1180b10 is same with the state(5) to be set 00:30:55.667 [2024-07-21 08:27:05.163397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.667 [2024-07-21 08:27:05.163427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119c590 with addr=10.0.0.2, port=4420 00:30:55.667 [2024-07-21 08:27:05.163444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119c590 is same with the state(5) to be set 00:30:55.667 [2024-07-21 08:27:05.163532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.667 [2024-07-21 08:27:05.163557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1301b90 with addr=10.0.0.2, port=4420 00:30:55.667 [2024-07-21 08:27:05.163572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301b90 is same with the state(5) to be set 00:30:55.667 [2024-07-21 08:27:05.163675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.667 [2024-07-21 08:27:05.163701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd517a0 with addr=10.0.0.2, port=4420 00:30:55.667 [2024-07-21 08:27:05.163716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd517a0 is same with the state(5) to be set 00:30:55.667 [2024-07-21 08:27:05.163827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.667 [2024-07-21 08:27:05.163851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1316bf0 with addr=10.0.0.2, port=4420 00:30:55.667 [2024-07-21 08:27:05.163867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1316bf0 is same with the state(5) to be set 00:30:55.667 [2024-07-21 08:27:05.163999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.667 
[2024-07-21 08:27:05.164023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11828c0 with addr=10.0.0.2, port=4420
00:30:55.667 [2024-07-21 08:27:05.164038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11828c0 is same with the state(5) to be set
00:30:55.667 [2024-07-21 08:27:05.164062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd49ee0 (9): Bad file descriptor
00:30:55.667 [2024-07-21 08:27:05.164082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118a340 (9): Bad file descriptor
00:30:55.667 [2024-07-21 08:27:05.164100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1180b10 (9): Bad file descriptor
00:30:55.667 [2024-07-21 08:27:05.164216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.667 [2024-07-21 08:27:05.164239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1 through cid:25, lba:8320 through lba:11392 in steps of 128, timestamps 08:27:05.164273 through 08:27:05.165017 ...]
00:30:55.668 [2024-07-21 08:27:05.165036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.668 [2024-07-21 08:27:05.165051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:27 through cid:63, lba:11648 through lba:16256 in steps of 128, timestamps 08:27:05.165066 through 08:27:05.166169 ...]
00:30:55.669 [2024-07-21 08:27:05.166186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1228110 is same with the state(5) to be set
00:30:55.669 [2024-07-21 08:27:05.168126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:30:55.669 task offset: 19712 on job bdev=Nvme2n1 fails
00:30:55.669
00:30:55.669                                                        Latency(us)
00:30:55.669 Device Information               : runtime(s)     IOPS    MiB/s   Fail/s     TO/s     Average        min        max
00:30:55.669 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:55.669 Job: Nvme1n1 ended in about 0.67 seconds with error
00:30:55.669 Verification LBA range: start 0x0 length 0x400
00:30:55.669 Nvme1n1                          :       0.67   191.75    11.98    95.88     0.00   219181.01   30098.01  240784.12
00:30:55.669 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:55.669 Job: Nvme2n1 ended in about 0.65 seconds with error
00:30:55.669 Verification LBA range: start 0x0 length 0x400
00:30:55.669 Nvme2n1                          :       0.65   196.97    12.31    98.48     0.00   207178.90    3373.89  245444.46
00:30:55.669 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:55.669 Job: Nvme3n1 ended in about 0.67 seconds with error
00:30:55.669 Verification LBA range: start 0x0 length 0x400
00:30:55.669 Nvme3n1                          :       0.67   190.83    11.93    95.41     0.00   208055.06   17767.54  254765.13
00:30:55.669 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:55.669 Job: Nvme4n1 ended in about 0.67 seconds with error
00:30:55.669 Verification LBA range: start 0x0 length 0x400
00:30:55.669 Nvme4n1                          :       0.67   189.92    11.87    94.96     0.00   203003.32   19126.80  242337.56
00:30:55.669 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:55.669 Job: Nvme5n1 ended in about 0.68 seconds with error
00:30:55.669 Verification LBA range: start 0x0 length 0x400
00:30:55.669 Nvme5n1                          :       0.68    94.51     5.91    94.51     0.00   297056.33   32234.00  281173.71
00:30:55.669 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:55.669 Job: Nvme6n1 ended in about 0.68 seconds with error
00:30:55.669 Verification LBA range: start 0x0 length 0x400
00:30:55.669 Nvme6n1                          :       0.68    94.06     5.88    94.06     0.00   289315.65   20874.43  251658.24
00:30:55.669 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:55.669 Job: Nvme7n1 ended in about 0.65 seconds with error
00:30:55.669 Verification LBA range: start 0x0 length 0x400
00:30:55.669 Nvme7n1                          :       0.65   195.76    12.24    97.88     0.00   177923.54    9466.31  231463.44
00:30:55.669 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:55.669 Job: Nvme8n1 ended in about 0.68 seconds with error
00:30:55.669 Verification LBA range: start 0x0 length 0x400
00:30:55.669 Nvme8n1                          :       0.68    93.62     5.85    93.62     0.00   273048.08   15340.28  228356.55
00:30:55.669 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:55.669 Job: Nvme9n1 ended in about 0.69 seconds with error
00:30:55.669 Verification LBA range: start 0x0 length 0x400
00:30:55.669 Nvme9n1                          :       0.69    92.12     5.76    92.12     0.00   269789.11   39807.05  260978.92
00:30:55.669 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:55.669 Job: Nvme10n1 ended in about 0.69 seconds with error
00:30:55.669 Verification LBA range: start 0x0 length 0x400
00:30:55.669 Nvme10n1                         :       0.69    93.18     5.82    93.18     0.00   257248.14   21359.88  281173.71
00:30:55.669 ===================================================================================================================
00:30:55.669 Total                            :           1432.73    89.55   950.12     0.00   232757.60    3373.89  281173.71
00:30:55.669 [2024-07-21 08:27:05.196137] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:55.669 [2024-07-21 08:27:05.196230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:55.669 [2024-07-21 08:27:05.196315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119c590 (9): Bad file descriptor
00:30:55.669 [2024-07-21 08:27:05.196359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301b90 (9): Bad file descriptor
00:30:55.669 [2024-07-21 08:27:05.196380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd517a0 (9): Bad file descriptor
00:30:55.669 [2024-07-21 08:27:05.196398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1316bf0 (9): Bad file descriptor
00:30:55.669 [2024-07-21 08:27:05.196416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11828c0 (9): Bad file descriptor
00:30:55.669 [2024-07-21 08:27:05.196433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:55.669 [2024-07-21 08:27:05.196446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:55.669 [2024-07-21 08:27:05.196463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:55.669 [2024-07-21 08:27:05.196488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:30:55.669 [2024-07-21 08:27:05.196503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:30:55.669 [2024-07-21 08:27:05.196516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:30:55.669 [2024-07-21 08:27:05.196534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:30:55.669 [2024-07-21 08:27:05.196549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:30:55.669 [2024-07-21 08:27:05.196561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:30:55.669 [2024-07-21 08:27:05.196772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:55.669 [2024-07-21 08:27:05.196797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:55.669 [2024-07-21 08:27:05.196810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:55.669 [2024-07-21 08:27:05.197048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.669 [2024-07-21 08:27:05.197082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc78610 with addr=10.0.0.2, port=4420
00:30:55.669 [2024-07-21 08:27:05.197102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78610 is same with the state(5) to be set
00:30:55.669 [2024-07-21 08:27:05.197241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.669 [2024-07-21 08:27:05.197267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1316a10 with addr=10.0.0.2, port=4420
00:30:55.669 [2024-07-21 08:27:05.197284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1316a10 is same with the state(5) to be set
00:30:55.669 [2024-07-21 08:27:05.197298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:30:55.669 [2024-07-21 08:27:05.197311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:30:55.669 [2024-07-21 08:27:05.197324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:30:55.669 [2024-07-21 08:27:05.197352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:30:55.669 [2024-07-21 08:27:05.197368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:30:55.669 [2024-07-21 08:27:05.197381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:30:55.669 [2024-07-21 08:27:05.197397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:30:55.669 [2024-07-21 08:27:05.197411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:30:55.670 [2024-07-21 08:27:05.197430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:30:55.670 [2024-07-21 08:27:05.197447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:30:55.670 [2024-07-21 08:27:05.197461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:30:55.670 [2024-07-21 08:27:05.197473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:30:55.670 [2024-07-21 08:27:05.197489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:30:55.670 [2024-07-21 08:27:05.197503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:30:55.670 [2024-07-21 08:27:05.197516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:30:55.670 [2024-07-21 08:27:05.197565] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:55.670 [2024-07-21 08:27:05.197587] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:55.670 [2024-07-21 08:27:05.197620] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:55.670 [2024-07-21 08:27:05.197642] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:55.670 [2024-07-21 08:27:05.197660] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:55.670 [2024-07-21 08:27:05.198003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:55.670 [2024-07-21 08:27:05.198026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:55.670 [2024-07-21 08:27:05.198038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:55.670 [2024-07-21 08:27:05.198050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:55.670 [2024-07-21 08:27:05.198061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:55.670 [2024-07-21 08:27:05.198086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc78610 (9): Bad file descriptor
00:30:55.670 [2024-07-21 08:27:05.198107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1316a10 (9): Bad file descriptor
00:30:55.670 [2024-07-21 08:27:05.198508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:30:55.670 [2024-07-21 08:27:05.198541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:30:55.670 [2024-07-21 08:27:05.198582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:30:55.670 [2024-07-21 08:27:05.198598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:30:55.670 [2024-07-21 08:27:05.198618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:30:55.670 [2024-07-21 08:27:05.198647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:55.670 [2024-07-21 08:27:05.198661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:55.670 [2024-07-21 08:27:05.198675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:55.670 [2024-07-21 08:27:05.198712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.670 [2024-07-21 08:27:05.198743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.670 [2024-07-21 08:27:05.198761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.670 [2024-07-21 08:27:05.198883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.670 [2024-07-21 08:27:05.198915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1180b10 with addr=10.0.0.2, port=4420 00:30:55.670 [2024-07-21 08:27:05.198942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1180b10 is same with the state(5) to be set 00:30:55.670 [2024-07-21 08:27:05.199060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.670 [2024-07-21 08:27:05.199088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118a340 with addr=10.0.0.2, port=4420 00:30:55.670 [2024-07-21 08:27:05.199105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118a340 is same with the state(5) to be set 00:30:55.670 [2024-07-21 08:27:05.199236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.670 [2024-07-21 08:27:05.199264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0xd49ee0 with addr=10.0.0.2, port=4420 00:30:55.670 [2024-07-21 08:27:05.199280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd49ee0 is same with the state(5) to be set 00:30:55.670 [2024-07-21 08:27:05.199299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1180b10 (9): Bad file descriptor 00:30:55.670 [2024-07-21 08:27:05.199318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118a340 (9): Bad file descriptor 00:30:55.670 [2024-07-21 08:27:05.199362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd49ee0 (9): Bad file descriptor 00:30:55.670 [2024-07-21 08:27:05.199384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:55.670 [2024-07-21 08:27:05.199398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:55.670 [2024-07-21 08:27:05.199411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:55.670 [2024-07-21 08:27:05.199427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:55.670 [2024-07-21 08:27:05.199442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:55.670 [2024-07-21 08:27:05.199454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:55.670 [2024-07-21 08:27:05.199492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.670 [2024-07-21 08:27:05.199510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.670 [2024-07-21 08:27:05.199522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.670 [2024-07-21 08:27:05.199534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.670 [2024-07-21 08:27:05.199547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.670 [2024-07-21 08:27:05.199587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.236 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:30:56.236 08:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 12569 00:30:57.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (12569) - No such process 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@117 -- # sync 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:57.173 rmmod nvme_tcp 00:30:57.173 rmmod nvme_fabrics 00:30:57.173 rmmod nvme_keyring 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:57.173 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:30:57.174 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:30:57.174 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:57.174 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:57.174 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:57.174 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:57.174 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:57.174 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:57.174 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.174 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:57.174 08:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.701 08:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:30:59.701 00:30:59.701 real 0m7.155s 00:30:59.701 user 0m17.005s 00:30:59.701 sys 0m1.312s 00:30:59.701 08:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:59.701 08:27:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:59.701 ************************************ 00:30:59.701 END TEST nvmf_shutdown_tc3 00:30:59.701 ************************************ 00:30:59.701 08:27:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:59.701 08:27:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:30:59.701 00:30:59.701 real 0m26.641s 00:30:59.701 user 1m14.026s 00:30:59.701 sys 0m6.189s 00:30:59.701 08:27:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:59.701 08:27:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:59.701 ************************************ 00:30:59.701 END TEST nvmf_shutdown 00:30:59.701 ************************************ 00:30:59.701 08:27:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:59.701 08:27:08 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:30:59.701 08:27:08 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:59.701 08:27:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:59.701 08:27:08 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:30:59.701 08:27:08 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:59.701 08:27:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:59.701 08:27:08 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:30:59.701 08:27:08 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:59.701 08:27:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:59.701 
08:27:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:59.701 08:27:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:59.701 ************************************ 00:30:59.701 START TEST nvmf_multicontroller 00:30:59.701 ************************************ 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:59.701 * Looking for test storage... 00:30:59.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.701 08:27:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:59.702 08:27:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.702 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:59.702 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:59.702 08:27:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:30:59.702 08:27:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.602 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.602 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:31:01.602 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:01.602 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:01.602 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:01.602 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:01.602 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:01.602 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:31:01.602 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:01.602 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@297 -- # local -ga x722 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:01.603 08:27:10 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:01.603 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:01.603 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:01.603 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:01.603 
Found net devices under 0000:0a:00.1: cvl_0_1 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:01.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:01.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:31:01.603 00:31:01.603 --- 10.0.0.2 ping statistics --- 00:31:01.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.603 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:01.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:31:01.603 00:31:01.603 --- 10.0.0.1 ping statistics --- 00:31:01.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.603 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=15467 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 
15467 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 15467 ']' 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:01.603 08:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.603 [2024-07-21 08:27:10.984959] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:31:01.603 [2024-07-21 08:27:10.985038] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.603 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.603 [2024-07-21 08:27:11.051707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:01.603 [2024-07-21 08:27:11.142247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.603 [2024-07-21 08:27:11.142310] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.603 [2024-07-21 08:27:11.142326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.604 [2024-07-21 08:27:11.142339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:01.604 [2024-07-21 08:27:11.142351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.604 [2024-07-21 08:27:11.142452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:01.604 [2024-07-21 08:27:11.142543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:01.604 [2024-07-21 08:27:11.142545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.861 [2024-07-21 08:27:11.273177] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:31:01.861 Malloc0 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.861 [2024-07-21 08:27:11.334384] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.861 08:27:11 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.861 [2024-07-21 08:27:11.342311] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.861 Malloc1 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.861 
08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=15506 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 15506 /var/tmp/bdevperf.sock 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 15506 ']' 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:01.861 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:01.862 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:01.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:01.862 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:01.862 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.119 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:02.119 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:31:02.119 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:31:02.119 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.119 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.379 NVMe0n1 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.379 1 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.379 request: 00:31:02.379 { 00:31:02.379 "name": "NVMe0", 00:31:02.379 "trtype": "tcp", 00:31:02.379 "traddr": "10.0.0.2", 00:31:02.379 "adrfam": "ipv4", 00:31:02.379 "trsvcid": "4420", 00:31:02.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:02.379 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:02.379 "hostaddr": "10.0.0.2", 00:31:02.379 "hostsvcid": "60000", 00:31:02.379 "prchk_reftag": false, 00:31:02.379 "prchk_guard": false, 00:31:02.379 "hdgst": false, 00:31:02.379 "ddgst": false, 00:31:02.379 "method": "bdev_nvme_attach_controller", 00:31:02.379 "req_id": 1 00:31:02.379 } 00:31:02.379 Got JSON-RPC error response 00:31:02.379 response: 00:31:02.379 { 00:31:02.379 "code": -114, 00:31:02.379 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:02.379 } 00:31:02.379 
08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.379 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:31:02.379 request: 00:31:02.379 { 00:31:02.379 "name": "NVMe0", 00:31:02.379 "trtype": "tcp", 00:31:02.379 "traddr": "10.0.0.2", 00:31:02.379 "adrfam": "ipv4", 00:31:02.379 "trsvcid": "4420", 00:31:02.379 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:02.379 "hostaddr": "10.0.0.2", 00:31:02.379 "hostsvcid": "60000", 00:31:02.379 "prchk_reftag": false, 00:31:02.379 "prchk_guard": false, 00:31:02.379 "hdgst": false, 00:31:02.379 "ddgst": false, 00:31:02.380 "method": "bdev_nvme_attach_controller", 00:31:02.380 "req_id": 1 00:31:02.380 } 00:31:02.380 Got JSON-RPC error response 00:31:02.380 response: 00:31:02.380 { 00:31:02.380 "code": -114, 00:31:02.380 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:02.380 } 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.380 request: 00:31:02.380 { 00:31:02.380 "name": "NVMe0", 00:31:02.380 "trtype": "tcp", 00:31:02.380 "traddr": "10.0.0.2", 00:31:02.380 "adrfam": "ipv4", 00:31:02.380 "trsvcid": "4420", 00:31:02.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:02.380 "hostaddr": "10.0.0.2", 00:31:02.380 "hostsvcid": "60000", 00:31:02.380 "prchk_reftag": false, 00:31:02.380 "prchk_guard": false, 00:31:02.380 "hdgst": false, 00:31:02.380 "ddgst": false, 00:31:02.380 "multipath": "disable", 00:31:02.380 "method": "bdev_nvme_attach_controller", 00:31:02.380 "req_id": 1 00:31:02.380 } 00:31:02.380 Got JSON-RPC error response 00:31:02.380 response: 00:31:02.380 { 00:31:02.380 "code": -114, 00:31:02.380 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:31:02.380 } 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.380 request: 00:31:02.380 { 00:31:02.380 "name": "NVMe0", 00:31:02.380 "trtype": "tcp", 00:31:02.380 "traddr": "10.0.0.2", 00:31:02.380 "adrfam": "ipv4", 00:31:02.380 "trsvcid": "4420", 00:31:02.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:02.380 "hostaddr": "10.0.0.2", 00:31:02.380 
"hostsvcid": "60000", 00:31:02.380 "prchk_reftag": false, 00:31:02.380 "prchk_guard": false, 00:31:02.380 "hdgst": false, 00:31:02.380 "ddgst": false, 00:31:02.380 "multipath": "failover", 00:31:02.380 "method": "bdev_nvme_attach_controller", 00:31:02.380 "req_id": 1 00:31:02.380 } 00:31:02.380 Got JSON-RPC error response 00:31:02.380 response: 00:31:02.380 { 00:31:02.380 "code": -114, 00:31:02.380 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:02.380 } 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.380 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.380 08:27:11 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.380 08:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.639 00:31:02.639 08:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.639 08:27:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:02.639 08:27:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:02.639 08:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.639 08:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.639 08:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.639 08:27:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:02.639 08:27:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:03.571 0 00:31:03.571 08:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:03.571 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.571 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.829 
08:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 15506 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 15506 ']' 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 15506 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 15506 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 15506' 00:31:03.829 killing process with pid 15506 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 15506 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 15506 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:03.829 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:31:04.087 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:04.087 [2024-07-21 08:27:11.441780] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:31:04.087 [2024-07-21 08:27:11.441876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid15506 ] 00:31:04.087 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.087 [2024-07-21 08:27:11.506114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.087 [2024-07-21 08:27:11.592727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.087 [2024-07-21 08:27:12.051107] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name a87f4b5b-7e8a-4cbd-b916-470370f84dc3 already exists 00:31:04.087 [2024-07-21 08:27:12.051147] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:a87f4b5b-7e8a-4cbd-b916-470370f84dc3 alias for bdev NVMe1n1 00:31:04.087 [2024-07-21 08:27:12.051177] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:04.087 Running I/O for 1 seconds... 
00:31:04.087 00:31:04.087 Latency(us) 00:31:04.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.087 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:04.087 NVMe0n1 : 1.01 17035.86 66.55 0.00 0.00 7501.04 6650.69 16602.45 00:31:04.087 =================================================================================================================== 00:31:04.087 Total : 17035.86 66.55 0.00 0.00 7501.04 6650.69 16602.45 00:31:04.087 Received shutdown signal, test time was about 1.000000 seconds 00:31:04.087 00:31:04.087 Latency(us) 00:31:04.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.087 =================================================================================================================== 00:31:04.087 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:04.087 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:04.087 rmmod nvme_tcp 00:31:04.087 rmmod nvme_fabrics 00:31:04.087 rmmod nvme_keyring 
00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 15467 ']' 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 15467 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 15467 ']' 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 15467 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 15467 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 15467' 00:31:04.087 killing process with pid 15467 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 15467 00:31:04.087 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 15467 00:31:04.347 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:04.347 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:04.347 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:04.347 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:04.347 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:04.347 08:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.347 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:04.347 08:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.250 08:27:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:06.250 00:31:06.250 real 0m7.021s 00:31:06.250 user 0m10.481s 00:31:06.250 sys 0m2.233s 00:31:06.250 08:27:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:06.250 08:27:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.250 ************************************ 00:31:06.250 END TEST nvmf_multicontroller 00:31:06.250 ************************************ 00:31:06.250 08:27:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:06.250 08:27:15 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:06.250 08:27:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:06.250 08:27:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.250 08:27:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:06.507 ************************************ 00:31:06.507 START TEST nvmf_aer 00:31:06.507 ************************************ 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:06.507 * Looking for test storage... 
00:31:06.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:31:06.507 08:27:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:08.439 08:27:17 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.439 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:08.440 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:08.440 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:08.440 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:08.440 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
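The device-discovery loop traced above matches each NIC's PCI device ID against the `e810`, `x722`, and `mlx` arrays populated earlier in `nvmf/common.sh`, which is how both `0x159b` ports get classified as E810 here. A minimal standalone sketch of that classification (device-ID table copied from the arrays visible in the trace; `classify_nic` is a hypothetical helper name, not part of SPDK):

```shell
#!/usr/bin/env bash
# Classify a NIC by PCI device ID, mirroring the e810/x722/mlx arrays
# built up in the nvmf/common.sh trace above. classify_nic is a
# hypothetical name used only for this sketch.
classify_nic() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;    # Intel E810 family
        0x37d2)        echo x722 ;;    # Intel X722
        0xa2dc|0x1021|0xa2d6|0x101d|0x1017|0x1019|0x1015|0x1013)
                       echo mlx ;;     # Mellanox ConnectX family
        *)             echo unknown ;;
    esac
}

classify_nic 0x159b   # prints: e810  (the ID found for 0000:0a:00.0 above)
```

Only E810 devices match in this run, which is why both `cvl_0_0` and `cvl_0_1` end up in `net_devs`.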
00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:08.440 08:27:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:08.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:31:08.440 00:31:08.440 --- 10.0.0.2 ping statistics --- 00:31:08.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.440 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:08.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:31:08.440 00:31:08.440 --- 10.0.0.1 ping statistics --- 00:31:08.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.440 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
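The `nvmf_tcp_init` sequence above builds a two-endpoint topology on one host: the target NIC (`cvl_0_0`) is moved into a private network namespace with 10.0.0.2/24 while the initiator NIC (`cvl_0_1`) stays in the root namespace with 10.0.0.1/24, and port 4420 is opened for NVMe/TCP. A dry-run sketch of that setup, using the interface and namespace names from this log (the `run` wrapper only prints; as root, replace its body with `"$@"` to apply the commands for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology set up by nvmf_tcp_init above.
# Names (NS, TGT_IF, INI_IF, the 10.0.0.0/24 addresses) are taken from
# this specific log; run() echoes instead of executing.
run() { echo "$@"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side NIC, moved into the namespace
INI_IF=cvl_0_1   # initiator-side NIC, stays in the root namespace

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

The two ping checks in the log (root namespace to 10.0.0.2, namespace to 10.0.0.1) verify this link before `nvmf_tgt` is launched inside the namespace via `ip netns exec`.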
00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=17699 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 17699 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 17699 ']' 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:08.440 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.698 [2024-07-21 08:27:18.107201] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:31:08.698 [2024-07-21 08:27:18.107285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.698 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.698 [2024-07-21 08:27:18.174874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:08.698 [2024-07-21 08:27:18.270184] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:08.698 [2024-07-21 08:27:18.270238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.698 [2024-07-21 08:27:18.270255] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.698 [2024-07-21 08:27:18.270269] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.698 [2024-07-21 08:27:18.270280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:08.698 [2024-07-21 08:27:18.270376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.698 [2024-07-21 08:27:18.270415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:08.698 [2024-07-21 08:27:18.270460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:08.698 [2024-07-21 08:27:18.270463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.957 [2024-07-21 08:27:18.433510] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.957 08:27:18 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.957 Malloc0 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.957 [2024-07-21 08:27:18.487454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:08.957 [ 00:31:08.957 { 00:31:08.957 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:08.957 "subtype": "Discovery", 00:31:08.957 "listen_addresses": [], 00:31:08.957 "allow_any_host": true, 00:31:08.957 "hosts": [] 00:31:08.957 }, 00:31:08.957 { 00:31:08.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:08.957 "subtype": "NVMe", 00:31:08.957 "listen_addresses": [ 00:31:08.957 { 00:31:08.957 "trtype": "TCP", 00:31:08.957 "adrfam": "IPv4", 00:31:08.957 "traddr": "10.0.0.2", 00:31:08.957 "trsvcid": "4420" 00:31:08.957 } 00:31:08.957 ], 00:31:08.957 "allow_any_host": true, 00:31:08.957 "hosts": [], 00:31:08.957 "serial_number": "SPDK00000000000001", 00:31:08.957 "model_number": "SPDK bdev Controller", 00:31:08.957 "max_namespaces": 2, 00:31:08.957 "min_cntlid": 1, 00:31:08.957 "max_cntlid": 65519, 00:31:08.957 "namespaces": [ 00:31:08.957 { 00:31:08.957 "nsid": 1, 00:31:08.957 "bdev_name": "Malloc0", 00:31:08.957 "name": "Malloc0", 00:31:08.957 "nguid": "116620364EDC4F2887C7DFFC76717908", 00:31:08.957 "uuid": "11662036-4edc-4f28-87c7-dffc76717908" 00:31:08.957 } 00:31:08.957 ] 00:31:08.957 } 00:31:08.957 ] 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=17838 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:08.957 08:27:18 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:31:08.957 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:08.957 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.215 Malloc1 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.215 [ 00:31:09.215 { 00:31:09.215 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:09.215 "subtype": "Discovery", 00:31:09.215 "listen_addresses": [], 00:31:09.215 "allow_any_host": true, 00:31:09.215 "hosts": [] 00:31:09.215 }, 00:31:09.215 { 00:31:09.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:09.215 "subtype": "NVMe", 00:31:09.215 "listen_addresses": [ 00:31:09.215 { 00:31:09.215 "trtype": "TCP", 00:31:09.215 "adrfam": "IPv4", 00:31:09.215 "traddr": "10.0.0.2", 00:31:09.215 "trsvcid": "4420" 00:31:09.215 } 00:31:09.215 ], 00:31:09.215 "allow_any_host": true, 00:31:09.215 "hosts": [], 00:31:09.215 "serial_number": "SPDK00000000000001", 00:31:09.215 "model_number": "SPDK bdev Controller", 00:31:09.215 "max_namespaces": 2, 00:31:09.215 "min_cntlid": 1, 00:31:09.215 "max_cntlid": 65519, 
00:31:09.215 "namespaces": [ 00:31:09.215 { 00:31:09.215 "nsid": 1, 00:31:09.215 "bdev_name": "Malloc0", 00:31:09.215 "name": "Malloc0", 00:31:09.215 "nguid": "116620364EDC4F2887C7DFFC76717908", 00:31:09.215 "uuid": "11662036-4edc-4f28-87c7-dffc76717908" 00:31:09.215 }, 00:31:09.215 { 00:31:09.215 "nsid": 2, 00:31:09.215 "bdev_name": "Malloc1", 00:31:09.215 "name": "Malloc1", 00:31:09.215 "nguid": "5C15D4628BAD4FD8B4F851CA92916374", 00:31:09.215 "uuid": "5c15d462-8bad-4fd8-b4f8-51ca92916374" 00:31:09.215 } 00:31:09.215 ] 00:31:09.215 } 00:31:09.215 ] 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 17838 00:31:09.215 Asynchronous Event Request test 00:31:09.215 Attaching to 10.0.0.2 00:31:09.215 Attached to 10.0.0.2 00:31:09.215 Registering asynchronous event callbacks... 00:31:09.215 Starting namespace attribute notice tests for all controllers... 00:31:09.215 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:09.215 aer_cb - Changed Namespace 00:31:09.215 Cleaning up... 
00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.215 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:09.473 rmmod nvme_tcp 00:31:09.473 rmmod nvme_fabrics 00:31:09.473 rmmod nvme_keyring 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer 
-- nvmf/common.sh@124 -- # set -e 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 17699 ']' 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 17699 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 17699 ']' 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 17699 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 17699 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 17699' 00:31:09.473 killing process with pid 17699 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 17699 00:31:09.473 08:27:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 17699 00:31:09.730 08:27:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:09.730 08:27:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:09.730 08:27:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:09.730 08:27:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:09.730 08:27:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:09.730 08:27:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.730 08:27:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:09.730 08:27:19 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.646 08:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:11.646 00:31:11.646 real 0m5.291s 00:31:11.646 user 0m4.080s 00:31:11.646 sys 0m1.889s 00:31:11.646 08:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:11.646 08:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:11.646 ************************************ 00:31:11.646 END TEST nvmf_aer 00:31:11.646 ************************************ 00:31:11.646 08:27:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:11.646 08:27:21 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:11.646 08:27:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:11.646 08:27:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:11.646 08:27:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:11.646 ************************************ 00:31:11.646 START TEST nvmf_async_init 00:31:11.646 ************************************ 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:11.646 * Looking for test storage... 
00:31:11.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.646 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8c431648caab47928a91c6c7bc11b182 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.906 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:11.907 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:11.907 08:27:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:31:11.907 08:27:21 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.805 
08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:13.805 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:13.805 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:13.805 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:13.805 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:13.805 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:13.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:13.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:31:13.806 00:31:13.806 --- 10.0.0.2 ping statistics --- 00:31:13.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.806 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:13.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:31:13.806 00:31:13.806 --- 10.0.0.1 ping statistics --- 00:31:13.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.806 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=19773 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 19773 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 19773 ']' 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:13.806 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.064 [2024-07-21 08:27:23.446767] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:31:14.064 [2024-07-21 08:27:23.446842] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.064 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.064 [2024-07-21 08:27:23.508692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.064 [2024-07-21 08:27:23.598326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:14.064 [2024-07-21 08:27:23.598389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.065 [2024-07-21 08:27:23.598406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.065 [2024-07-21 08:27:23.598419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.065 [2024-07-21 08:27:23.598432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.065 [2024-07-21 08:27:23.598462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.323 [2024-07-21 08:27:23.745602] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.323 null0 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8c431648caab47928a91c6c7bc11b182 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.323 [2024-07-21 08:27:23.785874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.323 08:27:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.583 nvme0n1 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.583 [ 00:31:14.583 { 00:31:14.583 "name": "nvme0n1", 00:31:14.583 "aliases": [ 00:31:14.583 "8c431648-caab-4792-8a91-c6c7bc11b182" 00:31:14.583 ], 00:31:14.583 "product_name": "NVMe disk", 00:31:14.583 "block_size": 512, 00:31:14.583 "num_blocks": 2097152, 00:31:14.583 "uuid": "8c431648-caab-4792-8a91-c6c7bc11b182", 00:31:14.583 "assigned_rate_limits": { 00:31:14.583 "rw_ios_per_sec": 0, 00:31:14.583 "rw_mbytes_per_sec": 0, 00:31:14.583 "r_mbytes_per_sec": 0, 00:31:14.583 "w_mbytes_per_sec": 0 00:31:14.583 }, 00:31:14.583 "claimed": false, 00:31:14.583 "zoned": false, 00:31:14.583 "supported_io_types": { 00:31:14.583 "read": true, 00:31:14.583 "write": true, 00:31:14.583 "unmap": false, 00:31:14.583 "flush": true, 00:31:14.583 "reset": true, 00:31:14.583 "nvme_admin": true, 00:31:14.583 "nvme_io": true, 00:31:14.583 "nvme_io_md": false, 00:31:14.583 "write_zeroes": true, 00:31:14.583 "zcopy": false, 00:31:14.583 "get_zone_info": false, 00:31:14.583 "zone_management": false, 00:31:14.583 "zone_append": false, 00:31:14.583 "compare": 
true, 00:31:14.583 "compare_and_write": true, 00:31:14.583 "abort": true, 00:31:14.583 "seek_hole": false, 00:31:14.583 "seek_data": false, 00:31:14.583 "copy": true, 00:31:14.583 "nvme_iov_md": false 00:31:14.583 }, 00:31:14.583 "memory_domains": [ 00:31:14.583 { 00:31:14.583 "dma_device_id": "system", 00:31:14.583 "dma_device_type": 1 00:31:14.583 } 00:31:14.583 ], 00:31:14.583 "driver_specific": { 00:31:14.583 "nvme": [ 00:31:14.583 { 00:31:14.583 "trid": { 00:31:14.583 "trtype": "TCP", 00:31:14.583 "adrfam": "IPv4", 00:31:14.583 "traddr": "10.0.0.2", 00:31:14.583 "trsvcid": "4420", 00:31:14.583 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:14.583 }, 00:31:14.583 "ctrlr_data": { 00:31:14.583 "cntlid": 1, 00:31:14.583 "vendor_id": "0x8086", 00:31:14.583 "model_number": "SPDK bdev Controller", 00:31:14.583 "serial_number": "00000000000000000000", 00:31:14.583 "firmware_revision": "24.09", 00:31:14.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.583 "oacs": { 00:31:14.583 "security": 0, 00:31:14.583 "format": 0, 00:31:14.583 "firmware": 0, 00:31:14.583 "ns_manage": 0 00:31:14.583 }, 00:31:14.583 "multi_ctrlr": true, 00:31:14.583 "ana_reporting": false 00:31:14.583 }, 00:31:14.583 "vs": { 00:31:14.583 "nvme_version": "1.3" 00:31:14.583 }, 00:31:14.583 "ns_data": { 00:31:14.583 "id": 1, 00:31:14.583 "can_share": true 00:31:14.583 } 00:31:14.583 } 00:31:14.583 ], 00:31:14.583 "mp_policy": "active_passive" 00:31:14.583 } 00:31:14.583 } 00:31:14.583 ] 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.583 [2024-07-21 08:27:24.039140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.583 [2024-07-21 08:27:24.039232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2392d10 (9): Bad file descriptor 00:31:14.583 [2024-07-21 08:27:24.171778] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.583 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.583 [ 00:31:14.583 { 00:31:14.583 "name": "nvme0n1", 00:31:14.583 "aliases": [ 00:31:14.583 "8c431648-caab-4792-8a91-c6c7bc11b182" 00:31:14.583 ], 00:31:14.583 "product_name": "NVMe disk", 00:31:14.583 "block_size": 512, 00:31:14.583 "num_blocks": 2097152, 00:31:14.583 "uuid": "8c431648-caab-4792-8a91-c6c7bc11b182", 00:31:14.583 "assigned_rate_limits": { 00:31:14.583 "rw_ios_per_sec": 0, 00:31:14.583 "rw_mbytes_per_sec": 0, 00:31:14.583 "r_mbytes_per_sec": 0, 00:31:14.583 "w_mbytes_per_sec": 0 00:31:14.583 }, 00:31:14.583 "claimed": false, 00:31:14.583 "zoned": false, 00:31:14.583 "supported_io_types": { 00:31:14.583 "read": true, 00:31:14.583 "write": true, 00:31:14.583 "unmap": false, 00:31:14.583 "flush": true, 00:31:14.583 "reset": true, 00:31:14.583 "nvme_admin": true, 00:31:14.583 "nvme_io": true, 00:31:14.583 "nvme_io_md": false, 00:31:14.583 "write_zeroes": true, 00:31:14.583 "zcopy": false, 00:31:14.583 "get_zone_info": false, 00:31:14.583 "zone_management": false, 00:31:14.583 "zone_append": false, 00:31:14.583 "compare": true, 00:31:14.583 "compare_and_write": true, 00:31:14.583 "abort": true, 00:31:14.583 "seek_hole": false, 00:31:14.583 "seek_data": false, 00:31:14.583 "copy": true, 00:31:14.583 "nvme_iov_md": 
false 00:31:14.583 }, 00:31:14.583 "memory_domains": [ 00:31:14.583 { 00:31:14.583 "dma_device_id": "system", 00:31:14.583 "dma_device_type": 1 00:31:14.583 } 00:31:14.583 ], 00:31:14.583 "driver_specific": { 00:31:14.583 "nvme": [ 00:31:14.584 { 00:31:14.584 "trid": { 00:31:14.584 "trtype": "TCP", 00:31:14.584 "adrfam": "IPv4", 00:31:14.584 "traddr": "10.0.0.2", 00:31:14.584 "trsvcid": "4420", 00:31:14.584 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:14.584 }, 00:31:14.584 "ctrlr_data": { 00:31:14.584 "cntlid": 2, 00:31:14.584 "vendor_id": "0x8086", 00:31:14.584 "model_number": "SPDK bdev Controller", 00:31:14.584 "serial_number": "00000000000000000000", 00:31:14.584 "firmware_revision": "24.09", 00:31:14.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.584 "oacs": { 00:31:14.584 "security": 0, 00:31:14.584 "format": 0, 00:31:14.584 "firmware": 0, 00:31:14.584 "ns_manage": 0 00:31:14.584 }, 00:31:14.584 "multi_ctrlr": true, 00:31:14.584 "ana_reporting": false 00:31:14.584 }, 00:31:14.584 "vs": { 00:31:14.584 "nvme_version": "1.3" 00:31:14.584 }, 00:31:14.584 "ns_data": { 00:31:14.584 "id": 1, 00:31:14.584 "can_share": true 00:31:14.584 } 00:31:14.584 } 00:31:14.584 ], 00:31:14.584 "mp_policy": "active_passive" 00:31:14.584 } 00:31:14.584 } 00:31:14.584 ] 00:31:14.584 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.584 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.584 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.584 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.584 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.584 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:14.584 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.jEoL6kd28v 00:31:14.584 08:27:24 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:14.584 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.jEoL6kd28v 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.843 [2024-07-21 08:27:24.223777] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:14.843 [2024-07-21 08:27:24.223931] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jEoL6kd28v 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.843 [2024-07-21 08:27:24.231797] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jEoL6kd28v 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.843 [2024-07-21 08:27:24.239829] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:14.843 [2024-07-21 08:27:24.239884] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:14.843 nvme0n1 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.843 [ 00:31:14.843 { 00:31:14.843 "name": "nvme0n1", 00:31:14.843 "aliases": [ 00:31:14.843 "8c431648-caab-4792-8a91-c6c7bc11b182" 00:31:14.843 ], 00:31:14.843 "product_name": "NVMe disk", 00:31:14.843 "block_size": 512, 00:31:14.843 "num_blocks": 2097152, 00:31:14.843 "uuid": "8c431648-caab-4792-8a91-c6c7bc11b182", 00:31:14.843 "assigned_rate_limits": { 00:31:14.843 "rw_ios_per_sec": 0, 00:31:14.843 "rw_mbytes_per_sec": 0, 00:31:14.843 "r_mbytes_per_sec": 0, 00:31:14.843 "w_mbytes_per_sec": 0 00:31:14.843 }, 00:31:14.843 "claimed": false, 00:31:14.843 "zoned": false, 00:31:14.843 "supported_io_types": { 00:31:14.843 "read": true, 00:31:14.843 "write": true, 00:31:14.843 "unmap": false, 00:31:14.843 "flush": true, 00:31:14.843 "reset": true, 
00:31:14.843 "nvme_admin": true, 00:31:14.843 "nvme_io": true, 00:31:14.843 "nvme_io_md": false, 00:31:14.843 "write_zeroes": true, 00:31:14.843 "zcopy": false, 00:31:14.843 "get_zone_info": false, 00:31:14.843 "zone_management": false, 00:31:14.843 "zone_append": false, 00:31:14.843 "compare": true, 00:31:14.843 "compare_and_write": true, 00:31:14.843 "abort": true, 00:31:14.843 "seek_hole": false, 00:31:14.843 "seek_data": false, 00:31:14.843 "copy": true, 00:31:14.843 "nvme_iov_md": false 00:31:14.843 }, 00:31:14.843 "memory_domains": [ 00:31:14.843 { 00:31:14.843 "dma_device_id": "system", 00:31:14.843 "dma_device_type": 1 00:31:14.843 } 00:31:14.843 ], 00:31:14.843 "driver_specific": { 00:31:14.843 "nvme": [ 00:31:14.843 { 00:31:14.843 "trid": { 00:31:14.843 "trtype": "TCP", 00:31:14.843 "adrfam": "IPv4", 00:31:14.843 "traddr": "10.0.0.2", 00:31:14.843 "trsvcid": "4421", 00:31:14.843 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:14.843 }, 00:31:14.843 "ctrlr_data": { 00:31:14.843 "cntlid": 3, 00:31:14.843 "vendor_id": "0x8086", 00:31:14.843 "model_number": "SPDK bdev Controller", 00:31:14.843 "serial_number": "00000000000000000000", 00:31:14.843 "firmware_revision": "24.09", 00:31:14.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.843 "oacs": { 00:31:14.843 "security": 0, 00:31:14.843 "format": 0, 00:31:14.843 "firmware": 0, 00:31:14.843 "ns_manage": 0 00:31:14.843 }, 00:31:14.843 "multi_ctrlr": true, 00:31:14.843 "ana_reporting": false 00:31:14.843 }, 00:31:14.843 "vs": { 00:31:14.843 "nvme_version": "1.3" 00:31:14.843 }, 00:31:14.843 "ns_data": { 00:31:14.843 "id": 1, 00:31:14.843 "can_share": true 00:31:14.843 } 00:31:14.843 } 00:31:14.843 ], 00:31:14.843 "mp_policy": "active_passive" 00:31:14.843 } 00:31:14.843 } 00:31:14.843 ] 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.jEoL6kd28v 00:31:14.843 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:14.844 rmmod nvme_tcp 00:31:14.844 rmmod nvme_fabrics 00:31:14.844 rmmod nvme_keyring 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 19773 ']' 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 19773 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 19773 ']' 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 19773 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 19773 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 19773' 00:31:14.844 killing process with pid 19773 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 19773 00:31:14.844 [2024-07-21 08:27:24.446920] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:14.844 [2024-07-21 08:27:24.446960] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:14.844 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 19773 00:31:15.102 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:15.102 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:15.102 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:15.102 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:15.102 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:15.102 08:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.102 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:15.102 08:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.635 08:27:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # 
ip -4 addr flush cvl_0_1 00:31:17.635 00:31:17.635 real 0m5.470s 00:31:17.635 user 0m2.091s 00:31:17.635 sys 0m1.768s 00:31:17.635 08:27:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:17.635 08:27:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:17.635 ************************************ 00:31:17.635 END TEST nvmf_async_init 00:31:17.635 ************************************ 00:31:17.635 08:27:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:17.635 08:27:26 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:17.635 08:27:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:17.635 08:27:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.635 08:27:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:17.635 ************************************ 00:31:17.635 START TEST dma 00:31:17.635 ************************************ 00:31:17.635 08:27:26 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:17.635 * Looking for test storage... 
00:31:17.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:17.635 08:27:26 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.635 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:31:17.635 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.635 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.635 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.635 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.635 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.635 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.635 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.635 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.635 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.636 08:27:26 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:31:17.636 08:27:26 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.636 08:27:26 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.636 08:27:26 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.636 08:27:26 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.636 08:27:26 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:31:17.636 08:27:26 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:31:17.636 08:27:26 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:17.636 08:27:26 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:17.636 08:27:26 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:17.636 08:27:26 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:31:17.636 00:31:17.636 real 0m0.069s 00:31:17.636 user 0m0.036s 00:31:17.636 sys 0m0.038s 00:31:17.636 08:27:26 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:17.636 08:27:26 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:31:17.636 ************************************ 00:31:17.636 END TEST dma 00:31:17.636 ************************************ 00:31:17.636 08:27:26 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:31:17.636 08:27:26 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:17.636 08:27:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:17.636 08:27:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.636 08:27:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:17.636 ************************************ 00:31:17.636 START TEST nvmf_identify 00:31:17.636 ************************************ 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:17.636 * Looking for test storage... 00:31:17.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 
00:31:17.636 08:27:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:19.535 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:19.535 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:19.535 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:19.535 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:19.536 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:19.536 08:27:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:19.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:19.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:31:19.536 00:31:19.536 --- 10.0.0.2 ping statistics --- 00:31:19.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.536 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:19.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:19.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:31:19.536 00:31:19.536 --- 10.0.0.1 ping statistics --- 00:31:19.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.536 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=21900 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify 
-- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 21900 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 21900 ']' 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:19.536 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:19.536 [2024-07-21 08:27:29.090671] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:31:19.536 [2024-07-21 08:27:29.090744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.536 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.536 [2024-07-21 08:27:29.156564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:19.794 [2024-07-21 08:27:29.244558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:19.794 [2024-07-21 08:27:29.244628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.794 [2024-07-21 08:27:29.244643] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.794 [2024-07-21 08:27:29.244655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.794 [2024-07-21 08:27:29.244679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:19.794 [2024-07-21 08:27:29.244780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.794 [2024-07-21 08:27:29.244804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:19.794 [2024-07-21 08:27:29.244864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:19.794 [2024-07-21 08:27:29.244867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.794 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:19.794 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:31:19.794 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:19.794 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.794 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:19.794 [2024-07-21 08:27:29.370254] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.794 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.794 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:19.794 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:19.794 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:19.794 08:27:29 
nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:19.794 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.794 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:20.053 Malloc0 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:20.053 [2024-07-21 08:27:29.447562] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.053 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:20.053 [ 00:31:20.053 { 00:31:20.053 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:20.053 "subtype": "Discovery", 00:31:20.053 "listen_addresses": [ 00:31:20.053 { 00:31:20.053 "trtype": "TCP", 00:31:20.053 "adrfam": "IPv4", 00:31:20.053 "traddr": "10.0.0.2", 00:31:20.053 "trsvcid": "4420" 00:31:20.053 } 00:31:20.053 ], 00:31:20.053 "allow_any_host": true, 00:31:20.053 "hosts": [] 00:31:20.053 }, 00:31:20.054 { 00:31:20.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.054 "subtype": "NVMe", 00:31:20.054 "listen_addresses": [ 00:31:20.054 { 00:31:20.054 "trtype": "TCP", 00:31:20.054 "adrfam": "IPv4", 00:31:20.054 "traddr": "10.0.0.2", 00:31:20.054 "trsvcid": "4420" 00:31:20.054 } 00:31:20.054 ], 00:31:20.054 "allow_any_host": true, 00:31:20.054 "hosts": [], 00:31:20.054 "serial_number": "SPDK00000000000001", 00:31:20.054 "model_number": "SPDK bdev Controller", 00:31:20.054 "max_namespaces": 32, 00:31:20.054 "min_cntlid": 1, 00:31:20.054 "max_cntlid": 65519, 00:31:20.054 "namespaces": [ 00:31:20.054 { 00:31:20.054 "nsid": 1, 00:31:20.054 "bdev_name": "Malloc0", 00:31:20.054 "name": "Malloc0", 00:31:20.054 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:20.054 "eui64": "ABCDEF0123456789", 00:31:20.054 "uuid": "5e0096a1-6c60-4b0b-b179-a34911c83819" 00:31:20.054 } 00:31:20.054 ] 00:31:20.054 } 00:31:20.054 ] 00:31:20.054 08:27:29 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.054 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:20.054 [2024-07-21 08:27:29.487751] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:31:20.054 [2024-07-21 08:27:29.487794] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid21936 ] 00:31:20.054 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.054 [2024-07-21 08:27:29.521074] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:20.054 [2024-07-21 08:27:29.521145] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:20.054 [2024-07-21 08:27:29.521155] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:20.054 [2024-07-21 08:27:29.521175] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:20.054 [2024-07-21 08:27:29.521186] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:20.054 [2024-07-21 08:27:29.524676] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:20.054 [2024-07-21 08:27:29.524729] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x913ae0 0 00:31:20.054 [2024-07-21 08:27:29.532641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:20.054 [2024-07-21 08:27:29.532662] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:20.054 [2024-07-21 08:27:29.532670] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:20.054 [2024-07-21 08:27:29.532676] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:20.054 [2024-07-21 08:27:29.532741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.532754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.532762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.532780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:20.054 [2024-07-21 08:27:29.532806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a240, cid 0, qid 0 00:31:20.054 [2024-07-21 08:27:29.540629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.054 [2024-07-21 08:27:29.540647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.054 [2024-07-21 08:27:29.540655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.540662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a240) on tqpair=0x913ae0 00:31:20.054 [2024-07-21 08:27:29.540677] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:20.054 [2024-07-21 08:27:29.540704] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:20.054 [2024-07-21 08:27:29.540713] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:20.054 [2024-07-21 08:27:29.540736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 
08:27:29.540744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.540751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.540762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.054 [2024-07-21 08:27:29.540786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a240, cid 0, qid 0 00:31:20.054 [2024-07-21 08:27:29.540937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.054 [2024-07-21 08:27:29.540953] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.054 [2024-07-21 08:27:29.540960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.540967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a240) on tqpair=0x913ae0 00:31:20.054 [2024-07-21 08:27:29.540976] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:20.054 [2024-07-21 08:27:29.540989] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:20.054 [2024-07-21 08:27:29.541002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541021] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.541032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.054 [2024-07-21 08:27:29.541054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a240, cid 0, qid 0 
00:31:20.054 [2024-07-21 08:27:29.541141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.054 [2024-07-21 08:27:29.541156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.054 [2024-07-21 08:27:29.541164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a240) on tqpair=0x913ae0 00:31:20.054 [2024-07-21 08:27:29.541180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:20.054 [2024-07-21 08:27:29.541194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:20.054 [2024-07-21 08:27:29.541207] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.541232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.054 [2024-07-21 08:27:29.541253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a240, cid 0, qid 0 00:31:20.054 [2024-07-21 08:27:29.541345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.054 [2024-07-21 08:27:29.541358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.054 [2024-07-21 08:27:29.541365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a240) on tqpair=0x913ae0 00:31:20.054 [2024-07-21 08:27:29.541381] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:20.054 [2024-07-21 08:27:29.541398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541407] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.541424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.054 [2024-07-21 08:27:29.541445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a240, cid 0, qid 0 00:31:20.054 [2024-07-21 08:27:29.541537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.054 [2024-07-21 08:27:29.541552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.054 [2024-07-21 08:27:29.541559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a240) on tqpair=0x913ae0 00:31:20.054 [2024-07-21 08:27:29.541575] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:20.054 [2024-07-21 08:27:29.541583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:20.054 [2024-07-21 08:27:29.541596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:20.054 [2024-07-21 08:27:29.541707] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 
00:31:20.054 [2024-07-21 08:27:29.541717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:20.054 [2024-07-21 08:27:29.541735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.541761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.054 [2024-07-21 08:27:29.541782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a240, cid 0, qid 0 00:31:20.054 [2024-07-21 08:27:29.541907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.054 [2024-07-21 08:27:29.541922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.054 [2024-07-21 08:27:29.541930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a240) on tqpair=0x913ae0 00:31:20.054 [2024-07-21 08:27:29.541945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:20.054 [2024-07-21 08:27:29.541962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541971] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.541978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.541988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.054 [2024-07-21 08:27:29.542009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a240, cid 0, qid 0 00:31:20.054 [2024-07-21 08:27:29.542099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.054 [2024-07-21 08:27:29.542111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.054 [2024-07-21 08:27:29.542119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.542126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a240) on tqpair=0x913ae0 00:31:20.054 [2024-07-21 08:27:29.542133] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:20.054 [2024-07-21 08:27:29.542142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:20.054 [2024-07-21 08:27:29.542155] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:20.054 [2024-07-21 08:27:29.542175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:20.054 [2024-07-21 08:27:29.542191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.542199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.542209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.054 [2024-07-21 08:27:29.542230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a240, cid 0, qid 0 00:31:20.054 [2024-07-21 
08:27:29.542366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.054 [2024-07-21 08:27:29.542379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.054 [2024-07-21 08:27:29.542387] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.542393] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x913ae0): datao=0, datal=4096, cccid=0 00:31:20.054 [2024-07-21 08:27:29.542401] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x96a240) on tqpair(0x913ae0): expected_datao=0, payload_size=4096 00:31:20.054 [2024-07-21 08:27:29.542413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.542431] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.542441] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.582740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.054 [2024-07-21 08:27:29.582759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.054 [2024-07-21 08:27:29.582768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.582775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a240) on tqpair=0x913ae0 00:31:20.054 [2024-07-21 08:27:29.582787] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:20.054 [2024-07-21 08:27:29.582801] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:20.054 [2024-07-21 08:27:29.582810] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:20.054 [2024-07-21 08:27:29.582819] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:31:20.054 [2024-07-21 08:27:29.582828] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:20.054 [2024-07-21 08:27:29.582836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:20.054 [2024-07-21 08:27:29.582850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:20.054 [2024-07-21 08:27:29.582863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.582871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.582877] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.582889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:20.054 [2024-07-21 08:27:29.582911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a240, cid 0, qid 0 00:31:20.054 [2024-07-21 08:27:29.583006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.054 [2024-07-21 08:27:29.583019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.054 [2024-07-21 08:27:29.583027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a240) on tqpair=0x913ae0 00:31:20.054 [2024-07-21 08:27:29.583046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583059] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.583069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.054 [2024-07-21 08:27:29.583079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.583101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.054 [2024-07-21 08:27:29.583111] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.583133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.054 [2024-07-21 08:27:29.583147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.583169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.054 [2024-07-21 08:27:29.583178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive 
timeout (timeout 30000 ms) 00:31:20.054 [2024-07-21 08:27:29.583212] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:20.054 [2024-07-21 08:27:29.583226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x913ae0) 00:31:20.054 [2024-07-21 08:27:29.583243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.054 [2024-07-21 08:27:29.583265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a240, cid 0, qid 0 00:31:20.054 [2024-07-21 08:27:29.583291] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a3c0, cid 1, qid 0 00:31:20.054 [2024-07-21 08:27:29.583299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a540, cid 2, qid 0 00:31:20.054 [2024-07-21 08:27:29.583307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.054 [2024-07-21 08:27:29.583314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a840, cid 4, qid 0 00:31:20.054 [2024-07-21 08:27:29.583453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.054 [2024-07-21 08:27:29.583466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.054 [2024-07-21 08:27:29.583474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a840) on tqpair=0x913ae0 00:31:20.054 [2024-07-21 08:27:29.583490] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:20.054 [2024-07-21 
08:27:29.583499] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:20.054 [2024-07-21 08:27:29.583516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.054 [2024-07-21 08:27:29.583526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x913ae0) 00:31:20.055 [2024-07-21 08:27:29.583536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.055 [2024-07-21 08:27:29.583557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a840, cid 4, qid 0 00:31:20.055 [2024-07-21 08:27:29.583688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.055 [2024-07-21 08:27:29.583704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.055 [2024-07-21 08:27:29.583712] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.583718] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x913ae0): datao=0, datal=4096, cccid=4 00:31:20.055 [2024-07-21 08:27:29.583726] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x96a840) on tqpair(0x913ae0): expected_datao=0, payload_size=4096 00:31:20.055 [2024-07-21 08:27:29.583734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.583744] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.583752] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.583764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.055 [2024-07-21 08:27:29.583778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.055 [2024-07-21 08:27:29.583786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.583793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a840) on tqpair=0x913ae0 00:31:20.055 [2024-07-21 08:27:29.583811] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:20.055 [2024-07-21 08:27:29.583847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.583858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x913ae0) 00:31:20.055 [2024-07-21 08:27:29.583869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.055 [2024-07-21 08:27:29.583880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.583887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.583894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x913ae0) 00:31:20.055 [2024-07-21 08:27:29.583902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.055 [2024-07-21 08:27:29.583928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a840, cid 4, qid 0 00:31:20.055 [2024-07-21 08:27:29.583941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a9c0, cid 5, qid 0 00:31:20.055 [2024-07-21 08:27:29.584072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.055 [2024-07-21 08:27:29.584087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.055 [2024-07-21 08:27:29.584095] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.584101] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x913ae0): 
datao=0, datal=1024, cccid=4 00:31:20.055 [2024-07-21 08:27:29.584109] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x96a840) on tqpair(0x913ae0): expected_datao=0, payload_size=1024 00:31:20.055 [2024-07-21 08:27:29.584116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.584126] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.584133] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.584142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.055 [2024-07-21 08:27:29.584151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.055 [2024-07-21 08:27:29.584158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.584165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a9c0) on tqpair=0x913ae0 00:31:20.055 [2024-07-21 08:27:29.626625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.055 [2024-07-21 08:27:29.626645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.055 [2024-07-21 08:27:29.626653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.626660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a840) on tqpair=0x913ae0 00:31:20.055 [2024-07-21 08:27:29.626677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.626686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x913ae0) 00:31:20.055 [2024-07-21 08:27:29.626698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.055 [2024-07-21 08:27:29.626745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x96a840, cid 4, qid 0 00:31:20.055 [2024-07-21 08:27:29.626897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.055 [2024-07-21 08:27:29.626910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.055 [2024-07-21 08:27:29.626918] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.626929] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x913ae0): datao=0, datal=3072, cccid=4 00:31:20.055 [2024-07-21 08:27:29.626937] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x96a840) on tqpair(0x913ae0): expected_datao=0, payload_size=3072 00:31:20.055 [2024-07-21 08:27:29.626945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.626955] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.626963] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.626975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.055 [2024-07-21 08:27:29.626985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.055 [2024-07-21 08:27:29.626992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.626999] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a840) on tqpair=0x913ae0 00:31:20.055 [2024-07-21 08:27:29.627014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.627023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x913ae0) 00:31:20.055 [2024-07-21 08:27:29.627034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.055 [2024-07-21 08:27:29.627063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x96a840, cid 4, qid 0 00:31:20.055 [2024-07-21 08:27:29.627171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.055 [2024-07-21 08:27:29.627184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.055 [2024-07-21 08:27:29.627192] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.627199] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x913ae0): datao=0, datal=8, cccid=4 00:31:20.055 [2024-07-21 08:27:29.627206] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x96a840) on tqpair(0x913ae0): expected_datao=0, payload_size=8 00:31:20.055 [2024-07-21 08:27:29.627215] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.627225] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.627233] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.667720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.055 [2024-07-21 08:27:29.667739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.055 [2024-07-21 08:27:29.667747] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.055 [2024-07-21 08:27:29.667754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a840) on tqpair=0x913ae0 00:31:20.055 ===================================================== 00:31:20.055 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:20.055 ===================================================== 00:31:20.055 Controller Capabilities/Features 00:31:20.055 ================================ 00:31:20.055 Vendor ID: 0000 00:31:20.055 Subsystem Vendor ID: 0000 00:31:20.055 Serial Number: .................... 00:31:20.055 Model Number: ........................................ 
00:31:20.055 Firmware Version: 24.09 00:31:20.055 Recommended Arb Burst: 0 00:31:20.055 IEEE OUI Identifier: 00 00 00 00:31:20.055 Multi-path I/O 00:31:20.055 May have multiple subsystem ports: No 00:31:20.055 May have multiple controllers: No 00:31:20.055 Associated with SR-IOV VF: No 00:31:20.055 Max Data Transfer Size: 131072 00:31:20.055 Max Number of Namespaces: 0 00:31:20.055 Max Number of I/O Queues: 1024 00:31:20.055 NVMe Specification Version (VS): 1.3 00:31:20.055 NVMe Specification Version (Identify): 1.3 00:31:20.055 Maximum Queue Entries: 128 00:31:20.055 Contiguous Queues Required: Yes 00:31:20.055 Arbitration Mechanisms Supported 00:31:20.055 Weighted Round Robin: Not Supported 00:31:20.055 Vendor Specific: Not Supported 00:31:20.055 Reset Timeout: 15000 ms 00:31:20.055 Doorbell Stride: 4 bytes 00:31:20.055 NVM Subsystem Reset: Not Supported 00:31:20.055 Command Sets Supported 00:31:20.055 NVM Command Set: Supported 00:31:20.055 Boot Partition: Not Supported 00:31:20.055 Memory Page Size Minimum: 4096 bytes 00:31:20.055 Memory Page Size Maximum: 4096 bytes 00:31:20.055 Persistent Memory Region: Not Supported 00:31:20.055 Optional Asynchronous Events Supported 00:31:20.055 Namespace Attribute Notices: Not Supported 00:31:20.055 Firmware Activation Notices: Not Supported 00:31:20.055 ANA Change Notices: Not Supported 00:31:20.055 PLE Aggregate Log Change Notices: Not Supported 00:31:20.055 LBA Status Info Alert Notices: Not Supported 00:31:20.055 EGE Aggregate Log Change Notices: Not Supported 00:31:20.055 Normal NVM Subsystem Shutdown event: Not Supported 00:31:20.055 Zone Descriptor Change Notices: Not Supported 00:31:20.055 Discovery Log Change Notices: Supported 00:31:20.055 Controller Attributes 00:31:20.055 128-bit Host Identifier: Not Supported 00:31:20.055 Non-Operational Permissive Mode: Not Supported 00:31:20.055 NVM Sets: Not Supported 00:31:20.055 Read Recovery Levels: Not Supported 00:31:20.055 Endurance Groups: Not Supported 00:31:20.055 
Predictable Latency Mode: Not Supported 00:31:20.055 Traffic Based Keep ALive: Not Supported 00:31:20.055 Namespace Granularity: Not Supported 00:31:20.055 SQ Associations: Not Supported 00:31:20.055 UUID List: Not Supported 00:31:20.055 Multi-Domain Subsystem: Not Supported 00:31:20.055 Fixed Capacity Management: Not Supported 00:31:20.055 Variable Capacity Management: Not Supported 00:31:20.055 Delete Endurance Group: Not Supported 00:31:20.055 Delete NVM Set: Not Supported 00:31:20.055 Extended LBA Formats Supported: Not Supported 00:31:20.055 Flexible Data Placement Supported: Not Supported 00:31:20.055 00:31:20.055 Controller Memory Buffer Support 00:31:20.055 ================================ 00:31:20.055 Supported: No 00:31:20.055 00:31:20.055 Persistent Memory Region Support 00:31:20.055 ================================ 00:31:20.055 Supported: No 00:31:20.055 00:31:20.055 Admin Command Set Attributes 00:31:20.055 ============================ 00:31:20.055 Security Send/Receive: Not Supported 00:31:20.055 Format NVM: Not Supported 00:31:20.055 Firmware Activate/Download: Not Supported 00:31:20.055 Namespace Management: Not Supported 00:31:20.055 Device Self-Test: Not Supported 00:31:20.055 Directives: Not Supported 00:31:20.055 NVMe-MI: Not Supported 00:31:20.055 Virtualization Management: Not Supported 00:31:20.055 Doorbell Buffer Config: Not Supported 00:31:20.055 Get LBA Status Capability: Not Supported 00:31:20.055 Command & Feature Lockdown Capability: Not Supported 00:31:20.055 Abort Command Limit: 1 00:31:20.055 Async Event Request Limit: 4 00:31:20.055 Number of Firmware Slots: N/A 00:31:20.055 Firmware Slot 1 Read-Only: N/A 00:31:20.055 Firmware Activation Without Reset: N/A 00:31:20.055 Multiple Update Detection Support: N/A 00:31:20.055 Firmware Update Granularity: No Information Provided 00:31:20.055 Per-Namespace SMART Log: No 00:31:20.055 Asymmetric Namespace Access Log Page: Not Supported 00:31:20.055 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:31:20.055 Command Effects Log Page: Not Supported 00:31:20.055 Get Log Page Extended Data: Supported 00:31:20.055 Telemetry Log Pages: Not Supported 00:31:20.055 Persistent Event Log Pages: Not Supported 00:31:20.055 Supported Log Pages Log Page: May Support 00:31:20.055 Commands Supported & Effects Log Page: Not Supported 00:31:20.055 Feature Identifiers & Effects Log Page:May Support 00:31:20.055 NVMe-MI Commands & Effects Log Page: May Support 00:31:20.055 Data Area 4 for Telemetry Log: Not Supported 00:31:20.055 Error Log Page Entries Supported: 128 00:31:20.055 Keep Alive: Not Supported 00:31:20.055 00:31:20.055 NVM Command Set Attributes 00:31:20.055 ========================== 00:31:20.055 Submission Queue Entry Size 00:31:20.055 Max: 1 00:31:20.055 Min: 1 00:31:20.055 Completion Queue Entry Size 00:31:20.055 Max: 1 00:31:20.055 Min: 1 00:31:20.055 Number of Namespaces: 0 00:31:20.055 Compare Command: Not Supported 00:31:20.055 Write Uncorrectable Command: Not Supported 00:31:20.055 Dataset Management Command: Not Supported 00:31:20.055 Write Zeroes Command: Not Supported 00:31:20.055 Set Features Save Field: Not Supported 00:31:20.055 Reservations: Not Supported 00:31:20.055 Timestamp: Not Supported 00:31:20.055 Copy: Not Supported 00:31:20.055 Volatile Write Cache: Not Present 00:31:20.055 Atomic Write Unit (Normal): 1 00:31:20.055 Atomic Write Unit (PFail): 1 00:31:20.055 Atomic Compare & Write Unit: 1 00:31:20.055 Fused Compare & Write: Supported 00:31:20.055 Scatter-Gather List 00:31:20.055 SGL Command Set: Supported 00:31:20.055 SGL Keyed: Supported 00:31:20.055 SGL Bit Bucket Descriptor: Not Supported 00:31:20.055 SGL Metadata Pointer: Not Supported 00:31:20.055 Oversized SGL: Not Supported 00:31:20.055 SGL Metadata Address: Not Supported 00:31:20.055 SGL Offset: Supported 00:31:20.055 Transport SGL Data Block: Not Supported 00:31:20.055 Replay Protected Memory Block: Not Supported 00:31:20.055 00:31:20.055 
Firmware Slot Information 00:31:20.055 ========================= 00:31:20.055 Active slot: 0 00:31:20.055 00:31:20.055 00:31:20.055 Error Log 00:31:20.055 ========= 00:31:20.055 00:31:20.055 Active Namespaces 00:31:20.055 ================= 00:31:20.055 Discovery Log Page 00:31:20.055 ================== 00:31:20.055 Generation Counter: 2 00:31:20.055 Number of Records: 2 00:31:20.055 Record Format: 0 00:31:20.055 00:31:20.055 Discovery Log Entry 0 00:31:20.055 ---------------------- 00:31:20.055 Transport Type: 3 (TCP) 00:31:20.055 Address Family: 1 (IPv4) 00:31:20.055 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:20.055 Entry Flags: 00:31:20.055 Duplicate Returned Information: 1 00:31:20.055 Explicit Persistent Connection Support for Discovery: 1 00:31:20.055 Transport Requirements: 00:31:20.055 Secure Channel: Not Required 00:31:20.055 Port ID: 0 (0x0000) 00:31:20.055 Controller ID: 65535 (0xffff) 00:31:20.055 Admin Max SQ Size: 128 00:31:20.055 Transport Service Identifier: 4420 00:31:20.055 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:20.055 Transport Address: 10.0.0.2 00:31:20.055 Discovery Log Entry 1 00:31:20.055 ---------------------- 00:31:20.055 Transport Type: 3 (TCP) 00:31:20.055 Address Family: 1 (IPv4) 00:31:20.055 Subsystem Type: 2 (NVM Subsystem) 00:31:20.055 Entry Flags: 00:31:20.055 Duplicate Returned Information: 0 00:31:20.055 Explicit Persistent Connection Support for Discovery: 0 00:31:20.055 Transport Requirements: 00:31:20.055 Secure Channel: Not Required 00:31:20.055 Port ID: 0 (0x0000) 00:31:20.055 Controller ID: 65535 (0xffff) 00:31:20.055 Admin Max SQ Size: 128 00:31:20.055 Transport Service Identifier: 4420 00:31:20.055 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:20.056 Transport Address: 10.0.0.2 [2024-07-21 08:27:29.667875] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:31:20.056 [2024-07-21 08:27:29.667897] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a240) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.667909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.056 [2024-07-21 08:27:29.667919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a3c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.667927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.056 [2024-07-21 08:27:29.667935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a540) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.667942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.056 [2024-07-21 08:27:29.667951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.667958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.056 [2024-07-21 08:27:29.667976] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.667988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.667996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.668007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.668046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.668205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.668219] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.668226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.668245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668253] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668259] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.668270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.668296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.668396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.668409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.668416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.668432] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:31:20.056 [2024-07-21 08:27:29.668441] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:31:20.056 [2024-07-21 08:27:29.668456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668472] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.668482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.668502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.668594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.668609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.668624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.668649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668665] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.668676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.668697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.668779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.668792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.668800] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 
08:27:29.668828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.668855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.668874] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.668961] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.668973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.668981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.668988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.669004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.669030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.669050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.669137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.669152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.669159] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.669183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.669209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.669229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.669319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.669334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.669342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.669365] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.669392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.669412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 
08:27:29.669493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.669506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.669514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.669541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.669569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.669588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.669696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.669712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.669720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.669743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.669770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.669791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.669880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.669892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.669900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.669923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.669939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.669949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.669969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.670056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.670071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.670078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.670086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.670102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.670111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 
[2024-07-21 08:27:29.670118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.670128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.670148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.670236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.670249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.670256] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.670263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.670279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.670288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.670299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.670310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.670330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.670415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.670427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.670435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.670442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 
00:31:20.056 [2024-07-21 08:27:29.670458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.670467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.670474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.670484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.670504] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.670593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.670608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 [2024-07-21 08:27:29.674627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.674637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.674656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.674681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.674688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x913ae0) 00:31:20.056 [2024-07-21 08:27:29.674699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.056 [2024-07-21 08:27:29.674721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96a6c0, cid 3, qid 0 00:31:20.056 [2024-07-21 08:27:29.674846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.056 [2024-07-21 08:27:29.674862] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.056 
[2024-07-21 08:27:29.674869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.056 [2024-07-21 08:27:29.674876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x96a6c0) on tqpair=0x913ae0 00:31:20.056 [2024-07-21 08:27:29.674890] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:31:20.320 00:31:20.320 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:20.320 [2024-07-21 08:27:29.709761] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:31:20.320 [2024-07-21 08:27:29.709807] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid22003 ] 00:31:20.320 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.320 [2024-07-21 08:27:29.741398] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:20.320 [2024-07-21 08:27:29.741451] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:20.320 [2024-07-21 08:27:29.741461] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:20.320 [2024-07-21 08:27:29.741478] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:20.320 [2024-07-21 08:27:29.741487] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:20.320 [2024-07-21 08:27:29.744651] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:20.320 
[2024-07-21 08:27:29.744700] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x131bae0 0 00:31:20.320 [2024-07-21 08:27:29.751624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:20.320 [2024-07-21 08:27:29.751643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:20.320 [2024-07-21 08:27:29.751651] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:20.320 [2024-07-21 08:27:29.751657] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:20.320 [2024-07-21 08:27:29.751712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.320 [2024-07-21 08:27:29.751723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.320 [2024-07-21 08:27:29.751730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131bae0) 00:31:20.320 [2024-07-21 08:27:29.751744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:20.320 [2024-07-21 08:27:29.751770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372240, cid 0, qid 0 00:31:20.320 [2024-07-21 08:27:29.758640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.320 [2024-07-21 08:27:29.758658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.320 [2024-07-21 08:27:29.758666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.320 [2024-07-21 08:27:29.758674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372240) on tqpair=0x131bae0 00:31:20.320 [2024-07-21 08:27:29.758688] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:20.320 [2024-07-21 08:27:29.758699] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:20.320 [2024-07-21 08:27:29.758708] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:20.320 [2024-07-21 08:27:29.758726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.320 [2024-07-21 08:27:29.758735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.320 [2024-07-21 08:27:29.758741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131bae0) 00:31:20.320 [2024-07-21 08:27:29.758753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.320 [2024-07-21 08:27:29.758777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372240, cid 0, qid 0 00:31:20.320 [2024-07-21 08:27:29.758918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.320 [2024-07-21 08:27:29.758931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.320 [2024-07-21 08:27:29.758938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.320 [2024-07-21 08:27:29.758945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372240) on tqpair=0x131bae0 00:31:20.320 [2024-07-21 08:27:29.758953] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:20.320 [2024-07-21 08:27:29.758966] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:20.320 [2024-07-21 08:27:29.758978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.320 [2024-07-21 08:27:29.758986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.320 [2024-07-21 08:27:29.758992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131bae0) 00:31:20.320 [2024-07-21 08:27:29.759007] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.320 [2024-07-21 08:27:29.759029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372240, cid 0, qid 0 00:31:20.320 [2024-07-21 08:27:29.759122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.320 [2024-07-21 08:27:29.759137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.320 [2024-07-21 08:27:29.759144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.320 [2024-07-21 08:27:29.759151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372240) on tqpair=0x131bae0 00:31:20.320 [2024-07-21 08:27:29.759160] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:20.320 [2024-07-21 08:27:29.759174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:20.320 [2024-07-21 08:27:29.759186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.320 [2024-07-21 08:27:29.759194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.320 [2024-07-21 08:27:29.759200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131bae0) 00:31:20.321 [2024-07-21 08:27:29.759210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.321 [2024-07-21 08:27:29.759231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372240, cid 0, qid 0 00:31:20.321 [2024-07-21 08:27:29.759322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.321 [2024-07-21 08:27:29.759337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.321 [2024-07-21 08:27:29.759344] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.759351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372240) on tqpair=0x131bae0 00:31:20.321 [2024-07-21 08:27:29.759359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:20.321 [2024-07-21 08:27:29.759376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.759386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.759392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131bae0) 00:31:20.321 [2024-07-21 08:27:29.759402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.321 [2024-07-21 08:27:29.759423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372240, cid 0, qid 0 00:31:20.321 [2024-07-21 08:27:29.759524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.321 [2024-07-21 08:27:29.759539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.321 [2024-07-21 08:27:29.759546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.759553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372240) on tqpair=0x131bae0 00:31:20.321 [2024-07-21 08:27:29.759561] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:20.321 [2024-07-21 08:27:29.759569] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:20.321 [2024-07-21 08:27:29.759582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable 
controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:20.321 [2024-07-21 08:27:29.759692] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:20.321 [2024-07-21 08:27:29.759701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:20.321 [2024-07-21 08:27:29.759713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.759724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.759731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131bae0) 00:31:20.321 [2024-07-21 08:27:29.759742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.321 [2024-07-21 08:27:29.759763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372240, cid 0, qid 0 00:31:20.321 [2024-07-21 08:27:29.759852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.321 [2024-07-21 08:27:29.759864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.321 [2024-07-21 08:27:29.759872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.759879] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372240) on tqpair=0x131bae0 00:31:20.321 [2024-07-21 08:27:29.759887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:20.321 [2024-07-21 08:27:29.759903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.759912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.759919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x131bae0) 00:31:20.321 [2024-07-21 08:27:29.759929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.321 [2024-07-21 08:27:29.759958] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372240, cid 0, qid 0 00:31:20.321 [2024-07-21 08:27:29.760051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.321 [2024-07-21 08:27:29.760066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.321 [2024-07-21 08:27:29.760074] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372240) on tqpair=0x131bae0 00:31:20.321 [2024-07-21 08:27:29.760088] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:20.321 [2024-07-21 08:27:29.760096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:20.321 [2024-07-21 08:27:29.760109] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:20.321 [2024-07-21 08:27:29.760127] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:20.321 [2024-07-21 08:27:29.760141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131bae0) 00:31:20.321 [2024-07-21 08:27:29.760159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.321 
[2024-07-21 08:27:29.760180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372240, cid 0, qid 0 00:31:20.321 [2024-07-21 08:27:29.760328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.321 [2024-07-21 08:27:29.760340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.321 [2024-07-21 08:27:29.760347] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760354] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131bae0): datao=0, datal=4096, cccid=0 00:31:20.321 [2024-07-21 08:27:29.760362] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1372240) on tqpair(0x131bae0): expected_datao=0, payload_size=4096 00:31:20.321 [2024-07-21 08:27:29.760369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760379] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760390] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.321 [2024-07-21 08:27:29.760422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.321 [2024-07-21 08:27:29.760429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372240) on tqpair=0x131bae0 00:31:20.321 [2024-07-21 08:27:29.760446] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:20.321 [2024-07-21 08:27:29.760474] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:20.321 [2024-07-21 08:27:29.760482] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 
00:31:20.321 [2024-07-21 08:27:29.760488] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:20.321 [2024-07-21 08:27:29.760496] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:20.321 [2024-07-21 08:27:29.760503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:20.321 [2024-07-21 08:27:29.760518] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:31:20.321 [2024-07-21 08:27:29.760529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131bae0) 00:31:20.321 [2024-07-21 08:27:29.760553] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:20.321 [2024-07-21 08:27:29.760573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372240, cid 0, qid 0 00:31:20.321 [2024-07-21 08:27:29.760734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.321 [2024-07-21 08:27:29.760749] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.321 [2024-07-21 08:27:29.760757] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372240) on tqpair=0x131bae0 00:31:20.321 [2024-07-21 08:27:29.760774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:31:20.321 [2024-07-21 08:27:29.760788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131bae0) 00:31:20.321 [2024-07-21 08:27:29.760797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.321 [2024-07-21 08:27:29.760807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x131bae0) 00:31:20.321 [2024-07-21 08:27:29.760829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.321 [2024-07-21 08:27:29.760839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x131bae0) 00:31:20.321 [2024-07-21 08:27:29.760861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.321 [2024-07-21 08:27:29.760870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131bae0) 00:31:20.321 [2024-07-21 08:27:29.760896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.321 [2024-07-21 08:27:29.760905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:20.321 [2024-07-21 08:27:29.760938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:20.321 [2024-07-21 08:27:29.760952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.321 [2024-07-21 08:27:29.760959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131bae0) 00:31:20.321 [2024-07-21 08:27:29.760969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.321 [2024-07-21 08:27:29.760990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372240, cid 0, qid 0 00:31:20.321 [2024-07-21 08:27:29.761016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13723c0, cid 1, qid 0 00:31:20.321 [2024-07-21 08:27:29.761024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372540, cid 2, qid 0 00:31:20.321 [2024-07-21 08:27:29.761032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13726c0, cid 3, qid 0 00:31:20.321 [2024-07-21 08:27:29.761039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372840, cid 4, qid 0 00:31:20.321 [2024-07-21 08:27:29.761179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.321 [2024-07-21 08:27:29.761194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.321 [2024-07-21 08:27:29.761201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.761208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372840) on tqpair=0x131bae0 00:31:20.322 [2024-07-21 08:27:29.761216] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive 
every 5000000 us 00:31:20.322 [2024-07-21 08:27:29.761225] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.761239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.761250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.761261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.761268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.761274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131bae0) 00:31:20.322 [2024-07-21 08:27:29.761285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:20.322 [2024-07-21 08:27:29.761321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372840, cid 4, qid 0 00:31:20.322 [2024-07-21 08:27:29.761466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.322 [2024-07-21 08:27:29.761481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.322 [2024-07-21 08:27:29.761489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.761495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372840) on tqpair=0x131bae0 00:31:20.322 [2024-07-21 08:27:29.761563] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.761582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for identify active ns (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.761600] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.761608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131bae0) 00:31:20.322 [2024-07-21 08:27:29.761626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-07-21 08:27:29.761649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372840, cid 4, qid 0 00:31:20.322 [2024-07-21 08:27:29.761762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.322 [2024-07-21 08:27:29.761777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.322 [2024-07-21 08:27:29.761784] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.761790] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131bae0): datao=0, datal=4096, cccid=4 00:31:20.322 [2024-07-21 08:27:29.761798] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1372840) on tqpair(0x131bae0): expected_datao=0, payload_size=4096 00:31:20.322 [2024-07-21 08:27:29.761805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.761822] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.761831] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.761863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.322 [2024-07-21 08:27:29.761874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.322 [2024-07-21 08:27:29.761881] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.761888] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372840) on tqpair=0x131bae0 00:31:20.322 [2024-07-21 08:27:29.761902] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:20.322 [2024-07-21 08:27:29.761918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.761944] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.761957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.761964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131bae0) 00:31:20.322 [2024-07-21 08:27:29.761990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-07-21 08:27:29.762012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372840, cid 4, qid 0 00:31:20.322 [2024-07-21 08:27:29.762145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.322 [2024-07-21 08:27:29.762160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.322 [2024-07-21 08:27:29.762167] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.762173] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131bae0): datao=0, datal=4096, cccid=4 00:31:20.322 [2024-07-21 08:27:29.762181] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1372840) on tqpair(0x131bae0): expected_datao=0, payload_size=4096 00:31:20.322 [2024-07-21 08:27:29.762188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.762205] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.762214] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.762274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.322 [2024-07-21 08:27:29.762286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.322 [2024-07-21 08:27:29.762293] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.762300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372840) on tqpair=0x131bae0 00:31:20.322 [2024-07-21 08:27:29.762319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.762359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.762374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.762381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131bae0) 00:31:20.322 [2024-07-21 08:27:29.762391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-07-21 08:27:29.762411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372840, cid 4, qid 0 00:31:20.322 [2024-07-21 08:27:29.762530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.322 [2024-07-21 08:27:29.762545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.322 [2024-07-21 08:27:29.762552] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.762558] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131bae0): datao=0, datal=4096, cccid=4 00:31:20.322 [2024-07-21 08:27:29.762566] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1372840) on tqpair(0x131bae0): expected_datao=0, payload_size=4096 00:31:20.322 [2024-07-21 08:27:29.762573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.762590] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.762599] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.766624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.322 [2024-07-21 08:27:29.766640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.322 [2024-07-21 08:27:29.766647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.766654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372840) on tqpair=0x131bae0 00:31:20.322 [2024-07-21 08:27:29.766677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.766692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.766722] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.766733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.766742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:20.322 
[2024-07-21 08:27:29.766750] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.766758] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:31:20.322 [2024-07-21 08:27:29.766766] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:31:20.322 [2024-07-21 08:27:29.766774] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:20.322 [2024-07-21 08:27:29.766793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.766802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131bae0) 00:31:20.322 [2024-07-21 08:27:29.766812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-07-21 08:27:29.766824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.766834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.766841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x131bae0) 00:31:20.322 [2024-07-21 08:27:29.766850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:20.322 [2024-07-21 08:27:29.766876] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372840, cid 4, qid 0 00:31:20.322 [2024-07-21 08:27:29.766888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13729c0, cid 5, qid 0 00:31:20.322 [2024-07-21 08:27:29.767004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.322 
[2024-07-21 08:27:29.767019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.322 [2024-07-21 08:27:29.767027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.767034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372840) on tqpair=0x131bae0 00:31:20.322 [2024-07-21 08:27:29.767045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.322 [2024-07-21 08:27:29.767054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.322 [2024-07-21 08:27:29.767061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.767068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13729c0) on tqpair=0x131bae0 00:31:20.322 [2024-07-21 08:27:29.767083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.767093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x131bae0) 00:31:20.322 [2024-07-21 08:27:29.767103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.322 [2024-07-21 08:27:29.767124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13729c0, cid 5, qid 0 00:31:20.322 [2024-07-21 08:27:29.767265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.322 [2024-07-21 08:27:29.767280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.322 [2024-07-21 08:27:29.767287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.767294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13729c0) on tqpair=0x131bae0 00:31:20.322 [2024-07-21 08:27:29.767310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.322 [2024-07-21 08:27:29.767319] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x131bae0) 00:31:20.323 [2024-07-21 08:27:29.767329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-07-21 08:27:29.767350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13729c0, cid 5, qid 0 00:31:20.323 [2024-07-21 08:27:29.767470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.323 [2024-07-21 08:27:29.767483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.323 [2024-07-21 08:27:29.767490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.767497] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13729c0) on tqpair=0x131bae0 00:31:20.323 [2024-07-21 08:27:29.767513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.767522] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x131bae0) 00:31:20.323 [2024-07-21 08:27:29.767532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-07-21 08:27:29.767552] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13729c0, cid 5, qid 0 00:31:20.323 [2024-07-21 08:27:29.767674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.323 [2024-07-21 08:27:29.767688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.323 [2024-07-21 08:27:29.767695] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.767706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13729c0) on tqpair=0x131bae0 00:31:20.323 [2024-07-21 08:27:29.767729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.767740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x131bae0) 00:31:20.323 [2024-07-21 08:27:29.767750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-07-21 08:27:29.767762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.767769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131bae0) 00:31:20.323 [2024-07-21 08:27:29.767778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-07-21 08:27:29.767789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.767796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x131bae0) 00:31:20.323 [2024-07-21 08:27:29.767805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-07-21 08:27:29.767816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.767823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x131bae0) 00:31:20.323 [2024-07-21 08:27:29.767832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.323 [2024-07-21 08:27:29.767854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13729c0, cid 5, qid 0 00:31:20.323 [2024-07-21 08:27:29.767866] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1372840, cid 4, qid 0 00:31:20.323 [2024-07-21 08:27:29.767874] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372b40, cid 6, qid 0 00:31:20.323 [2024-07-21 08:27:29.767881] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372cc0, cid 7, qid 0 00:31:20.323 [2024-07-21 08:27:29.768093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.323 [2024-07-21 08:27:29.768109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.323 [2024-07-21 08:27:29.768116] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768122] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131bae0): datao=0, datal=8192, cccid=5 00:31:20.323 [2024-07-21 08:27:29.768130] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13729c0) on tqpair(0x131bae0): expected_datao=0, payload_size=8192 00:31:20.323 [2024-07-21 08:27:29.768137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768147] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768155] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768164] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.323 [2024-07-21 08:27:29.768172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.323 [2024-07-21 08:27:29.768179] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768185] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131bae0): datao=0, datal=512, cccid=4 00:31:20.323 [2024-07-21 08:27:29.768193] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1372840) on tqpair(0x131bae0): expected_datao=0, payload_size=512 00:31:20.323 [2024-07-21 08:27:29.768200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768209] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768216] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.323 [2024-07-21 08:27:29.768237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.323 [2024-07-21 08:27:29.768245] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768251] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131bae0): datao=0, datal=512, cccid=6 00:31:20.323 [2024-07-21 08:27:29.768258] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1372b40) on tqpair(0x131bae0): expected_datao=0, payload_size=512 00:31:20.323 [2024-07-21 08:27:29.768265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768274] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768281] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:20.323 [2024-07-21 08:27:29.768299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:20.323 [2024-07-21 08:27:29.768305] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768311] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131bae0): datao=0, datal=4096, cccid=7 00:31:20.323 [2024-07-21 08:27:29.768319] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1372cc0) on tqpair(0x131bae0): expected_datao=0, payload_size=4096 00:31:20.323 [2024-07-21 08:27:29.768326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768336] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768343] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.323 [2024-07-21 08:27:29.768364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.323 [2024-07-21 08:27:29.768371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13729c0) on tqpair=0x131bae0 00:31:20.323 [2024-07-21 08:27:29.768396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.323 [2024-07-21 08:27:29.768407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.323 [2024-07-21 08:27:29.768414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372840) on tqpair=0x131bae0 00:31:20.323 [2024-07-21 08:27:29.768436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.323 [2024-07-21 08:27:29.768461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.323 [2024-07-21 08:27:29.768468] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372b40) on tqpair=0x131bae0 00:31:20.323 [2024-07-21 08:27:29.768486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.323 [2024-07-21 08:27:29.768495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.323 [2024-07-21 08:27:29.768502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.323 [2024-07-21 08:27:29.768508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372cc0) on 
tqpair=0x131bae0 00:31:20.323 ===================================================== 00:31:20.323 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:20.323 ===================================================== 00:31:20.323 Controller Capabilities/Features 00:31:20.323 ================================ 00:31:20.323 Vendor ID: 8086 00:31:20.323 Subsystem Vendor ID: 8086 00:31:20.323 Serial Number: SPDK00000000000001 00:31:20.323 Model Number: SPDK bdev Controller 00:31:20.323 Firmware Version: 24.09 00:31:20.323 Recommended Arb Burst: 6 00:31:20.323 IEEE OUI Identifier: e4 d2 5c 00:31:20.323 Multi-path I/O 00:31:20.323 May have multiple subsystem ports: Yes 00:31:20.323 May have multiple controllers: Yes 00:31:20.323 Associated with SR-IOV VF: No 00:31:20.323 Max Data Transfer Size: 131072 00:31:20.323 Max Number of Namespaces: 32 00:31:20.323 Max Number of I/O Queues: 127 00:31:20.323 NVMe Specification Version (VS): 1.3 00:31:20.323 NVMe Specification Version (Identify): 1.3 00:31:20.323 Maximum Queue Entries: 128 00:31:20.323 Contiguous Queues Required: Yes 00:31:20.323 Arbitration Mechanisms Supported 00:31:20.323 Weighted Round Robin: Not Supported 00:31:20.323 Vendor Specific: Not Supported 00:31:20.323 Reset Timeout: 15000 ms 00:31:20.323 Doorbell Stride: 4 bytes 00:31:20.323 NVM Subsystem Reset: Not Supported 00:31:20.323 Command Sets Supported 00:31:20.323 NVM Command Set: Supported 00:31:20.323 Boot Partition: Not Supported 00:31:20.323 Memory Page Size Minimum: 4096 bytes 00:31:20.323 Memory Page Size Maximum: 4096 bytes 00:31:20.323 Persistent Memory Region: Not Supported 00:31:20.323 Optional Asynchronous Events Supported 00:31:20.323 Namespace Attribute Notices: Supported 00:31:20.323 Firmware Activation Notices: Not Supported 00:31:20.323 ANA Change Notices: Not Supported 00:31:20.323 PLE Aggregate Log Change Notices: Not Supported 00:31:20.323 LBA Status Info Alert Notices: Not Supported 00:31:20.323 EGE Aggregate Log Change 
Notices: Not Supported 00:31:20.323 Normal NVM Subsystem Shutdown event: Not Supported 00:31:20.323 Zone Descriptor Change Notices: Not Supported 00:31:20.323 Discovery Log Change Notices: Not Supported 00:31:20.323 Controller Attributes 00:31:20.323 128-bit Host Identifier: Supported 00:31:20.323 Non-Operational Permissive Mode: Not Supported 00:31:20.323 NVM Sets: Not Supported 00:31:20.323 Read Recovery Levels: Not Supported 00:31:20.323 Endurance Groups: Not Supported 00:31:20.323 Predictable Latency Mode: Not Supported 00:31:20.323 Traffic Based Keep ALive: Not Supported 00:31:20.323 Namespace Granularity: Not Supported 00:31:20.323 SQ Associations: Not Supported 00:31:20.323 UUID List: Not Supported 00:31:20.323 Multi-Domain Subsystem: Not Supported 00:31:20.323 Fixed Capacity Management: Not Supported 00:31:20.323 Variable Capacity Management: Not Supported 00:31:20.323 Delete Endurance Group: Not Supported 00:31:20.324 Delete NVM Set: Not Supported 00:31:20.324 Extended LBA Formats Supported: Not Supported 00:31:20.324 Flexible Data Placement Supported: Not Supported 00:31:20.324 00:31:20.324 Controller Memory Buffer Support 00:31:20.324 ================================ 00:31:20.324 Supported: No 00:31:20.324 00:31:20.324 Persistent Memory Region Support 00:31:20.324 ================================ 00:31:20.324 Supported: No 00:31:20.324 00:31:20.324 Admin Command Set Attributes 00:31:20.324 ============================ 00:31:20.324 Security Send/Receive: Not Supported 00:31:20.324 Format NVM: Not Supported 00:31:20.324 Firmware Activate/Download: Not Supported 00:31:20.324 Namespace Management: Not Supported 00:31:20.324 Device Self-Test: Not Supported 00:31:20.324 Directives: Not Supported 00:31:20.324 NVMe-MI: Not Supported 00:31:20.324 Virtualization Management: Not Supported 00:31:20.324 Doorbell Buffer Config: Not Supported 00:31:20.324 Get LBA Status Capability: Not Supported 00:31:20.324 Command & Feature Lockdown Capability: Not Supported 
00:31:20.324 Abort Command Limit: 4 00:31:20.324 Async Event Request Limit: 4 00:31:20.324 Number of Firmware Slots: N/A 00:31:20.324 Firmware Slot 1 Read-Only: N/A 00:31:20.324 Firmware Activation Without Reset: N/A 00:31:20.324 Multiple Update Detection Support: N/A 00:31:20.324 Firmware Update Granularity: No Information Provided 00:31:20.324 Per-Namespace SMART Log: No 00:31:20.324 Asymmetric Namespace Access Log Page: Not Supported 00:31:20.324 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:20.324 Command Effects Log Page: Supported 00:31:20.324 Get Log Page Extended Data: Supported 00:31:20.324 Telemetry Log Pages: Not Supported 00:31:20.324 Persistent Event Log Pages: Not Supported 00:31:20.324 Supported Log Pages Log Page: May Support 00:31:20.324 Commands Supported & Effects Log Page: Not Supported 00:31:20.324 Feature Identifiers & Effects Log Page:May Support 00:31:20.324 NVMe-MI Commands & Effects Log Page: May Support 00:31:20.324 Data Area 4 for Telemetry Log: Not Supported 00:31:20.324 Error Log Page Entries Supported: 128 00:31:20.324 Keep Alive: Supported 00:31:20.324 Keep Alive Granularity: 10000 ms 00:31:20.324 00:31:20.324 NVM Command Set Attributes 00:31:20.324 ========================== 00:31:20.324 Submission Queue Entry Size 00:31:20.324 Max: 64 00:31:20.324 Min: 64 00:31:20.324 Completion Queue Entry Size 00:31:20.324 Max: 16 00:31:20.324 Min: 16 00:31:20.324 Number of Namespaces: 32 00:31:20.324 Compare Command: Supported 00:31:20.324 Write Uncorrectable Command: Not Supported 00:31:20.324 Dataset Management Command: Supported 00:31:20.324 Write Zeroes Command: Supported 00:31:20.324 Set Features Save Field: Not Supported 00:31:20.324 Reservations: Supported 00:31:20.324 Timestamp: Not Supported 00:31:20.324 Copy: Supported 00:31:20.324 Volatile Write Cache: Present 00:31:20.324 Atomic Write Unit (Normal): 1 00:31:20.324 Atomic Write Unit (PFail): 1 00:31:20.324 Atomic Compare & Write Unit: 1 00:31:20.324 Fused Compare & Write: Supported 
00:31:20.324 Scatter-Gather List 00:31:20.324 SGL Command Set: Supported 00:31:20.324 SGL Keyed: Supported 00:31:20.324 SGL Bit Bucket Descriptor: Not Supported 00:31:20.324 SGL Metadata Pointer: Not Supported 00:31:20.324 Oversized SGL: Not Supported 00:31:20.324 SGL Metadata Address: Not Supported 00:31:20.324 SGL Offset: Supported 00:31:20.324 Transport SGL Data Block: Not Supported 00:31:20.324 Replay Protected Memory Block: Not Supported 00:31:20.324 00:31:20.324 Firmware Slot Information 00:31:20.324 ========================= 00:31:20.324 Active slot: 1 00:31:20.324 Slot 1 Firmware Revision: 24.09 00:31:20.324 00:31:20.324 00:31:20.324 Commands Supported and Effects 00:31:20.324 ============================== 00:31:20.324 Admin Commands 00:31:20.324 -------------- 00:31:20.324 Get Log Page (02h): Supported 00:31:20.324 Identify (06h): Supported 00:31:20.324 Abort (08h): Supported 00:31:20.324 Set Features (09h): Supported 00:31:20.324 Get Features (0Ah): Supported 00:31:20.324 Asynchronous Event Request (0Ch): Supported 00:31:20.324 Keep Alive (18h): Supported 00:31:20.324 I/O Commands 00:31:20.324 ------------ 00:31:20.324 Flush (00h): Supported LBA-Change 00:31:20.324 Write (01h): Supported LBA-Change 00:31:20.324 Read (02h): Supported 00:31:20.324 Compare (05h): Supported 00:31:20.324 Write Zeroes (08h): Supported LBA-Change 00:31:20.324 Dataset Management (09h): Supported LBA-Change 00:31:20.324 Copy (19h): Supported LBA-Change 00:31:20.324 00:31:20.324 Error Log 00:31:20.324 ========= 00:31:20.324 00:31:20.324 Arbitration 00:31:20.324 =========== 00:31:20.324 Arbitration Burst: 1 00:31:20.324 00:31:20.324 Power Management 00:31:20.324 ================ 00:31:20.324 Number of Power States: 1 00:31:20.324 Current Power State: Power State #0 00:31:20.324 Power State #0: 00:31:20.324 Max Power: 0.00 W 00:31:20.324 Non-Operational State: Operational 00:31:20.324 Entry Latency: Not Reported 00:31:20.324 Exit Latency: Not Reported 00:31:20.324 Relative Read 
Throughput: 0 00:31:20.324 Relative Read Latency: 0 00:31:20.324 Relative Write Throughput: 0 00:31:20.324 Relative Write Latency: 0 00:31:20.324 Idle Power: Not Reported 00:31:20.324 Active Power: Not Reported 00:31:20.324 Non-Operational Permissive Mode: Not Supported 00:31:20.324 00:31:20.324 Health Information 00:31:20.324 ================== 00:31:20.324 Critical Warnings: 00:31:20.324 Available Spare Space: OK 00:31:20.324 Temperature: OK 00:31:20.324 Device Reliability: OK 00:31:20.324 Read Only: No 00:31:20.324 Volatile Memory Backup: OK 00:31:20.324 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:20.324 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:20.324 Available Spare: 0% 00:31:20.324 Available Spare Threshold: 0% 00:31:20.324 Life Percentage Used:[2024-07-21 08:27:29.768683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.324 [2024-07-21 08:27:29.768696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x131bae0) 00:31:20.324 [2024-07-21 08:27:29.768707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.324 [2024-07-21 08:27:29.768729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1372cc0, cid 7, qid 0 00:31:20.324 [2024-07-21 08:27:29.768879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.324 [2024-07-21 08:27:29.768893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.324 [2024-07-21 08:27:29.768900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.324 [2024-07-21 08:27:29.768907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372cc0) on tqpair=0x131bae0 00:31:20.324 [2024-07-21 08:27:29.768967] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:31:20.324 [2024-07-21 08:27:29.768987] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372240) on tqpair=0x131bae0 00:31:20.325 [2024-07-21 08:27:29.768998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.325 [2024-07-21 08:27:29.769007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13723c0) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 08:27:29.769014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.326 [2024-07-21 08:27:29.769022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1372540) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 08:27:29.769045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.326 [2024-07-21 08:27:29.769053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13726c0) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 08:27:29.769061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:20.326 [2024-07-21 08:27:29.769073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131bae0) 00:31:20.326 [2024-07-21 08:27:29.769096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.326 [2024-07-21 08:27:29.769118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13726c0, cid 3, qid 0 00:31:20.326 [2024-07-21 08:27:29.769223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.326 [2024-07-21 08:27:29.769236] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.326 [2024-07-21 08:27:29.769243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13726c0) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 08:27:29.769261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131bae0) 00:31:20.326 [2024-07-21 08:27:29.769285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.326 [2024-07-21 08:27:29.769311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13726c0, cid 3, qid 0 00:31:20.326 [2024-07-21 08:27:29.769423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.326 [2024-07-21 08:27:29.769435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.326 [2024-07-21 08:27:29.769442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13726c0) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 08:27:29.769457] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:31:20.326 [2024-07-21 08:27:29.769464] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:31:20.326 [2024-07-21 08:27:29.769479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769494] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131bae0) 00:31:20.326 [2024-07-21 08:27:29.769505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.326 [2024-07-21 08:27:29.769528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13726c0, cid 3, qid 0 00:31:20.326 [2024-07-21 08:27:29.769623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.326 [2024-07-21 08:27:29.769637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.326 [2024-07-21 08:27:29.769644] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13726c0) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 08:27:29.769667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131bae0) 00:31:20.326 [2024-07-21 08:27:29.769693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.326 [2024-07-21 08:27:29.769713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13726c0, cid 3, qid 0 00:31:20.326 [2024-07-21 08:27:29.769818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.326 [2024-07-21 08:27:29.769830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.326 [2024-07-21 08:27:29.769838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769844] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13726c0) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 
08:27:29.769860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.769876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131bae0) 00:31:20.326 [2024-07-21 08:27:29.769886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.326 [2024-07-21 08:27:29.769906] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13726c0, cid 3, qid 0 00:31:20.326 [2024-07-21 08:27:29.770004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.326 [2024-07-21 08:27:29.770018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.326 [2024-07-21 08:27:29.770026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.770033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13726c0) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 08:27:29.770049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.770058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.770065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131bae0) 00:31:20.326 [2024-07-21 08:27:29.770075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.326 [2024-07-21 08:27:29.770096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13726c0, cid 3, qid 0 00:31:20.326 [2024-07-21 08:27:29.770198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.326 [2024-07-21 08:27:29.770213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.326 [2024-07-21 
08:27:29.770221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.770228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13726c0) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 08:27:29.770244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.770253] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.770260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131bae0) 00:31:20.326 [2024-07-21 08:27:29.770270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.326 [2024-07-21 08:27:29.770291] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13726c0, cid 3, qid 0 00:31:20.326 [2024-07-21 08:27:29.770396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.326 [2024-07-21 08:27:29.770409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.326 [2024-07-21 08:27:29.770417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.770423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13726c0) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 08:27:29.770439] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.770448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.770455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131bae0) 00:31:20.326 [2024-07-21 08:27:29.770465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.326 [2024-07-21 08:27:29.770485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13726c0, cid 3, qid 0 
00:31:20.326 [2024-07-21 08:27:29.770598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.326 [2024-07-21 08:27:29.770610] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.326 [2024-07-21 08:27:29.774629] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.774638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13726c0) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 08:27:29.774656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.774681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.774688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131bae0) 00:31:20.326 [2024-07-21 08:27:29.774698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.326 [2024-07-21 08:27:29.774721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13726c0, cid 3, qid 0 00:31:20.326 [2024-07-21 08:27:29.774816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:20.326 [2024-07-21 08:27:29.774831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:20.326 [2024-07-21 08:27:29.774838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:20.326 [2024-07-21 08:27:29.774845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13726c0) on tqpair=0x131bae0 00:31:20.326 [2024-07-21 08:27:29.774858] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:31:20.326 0% 00:31:20.326 Data Units Read: 0 00:31:20.326 Data Units Written: 0 00:31:20.326 Host Read Commands: 0 00:31:20.326 Host Write Commands: 0 00:31:20.326 Controller Busy Time: 0 minutes 00:31:20.326 Power Cycles: 0 00:31:20.326 Power On Hours: 0 
hours 00:31:20.326 Unsafe Shutdowns: 0 00:31:20.326 Unrecoverable Media Errors: 0 00:31:20.326 Lifetime Error Log Entries: 0 00:31:20.326 Warning Temperature Time: 0 minutes 00:31:20.326 Critical Temperature Time: 0 minutes 00:31:20.326 00:31:20.326 Number of Queues 00:31:20.326 ================ 00:31:20.326 Number of I/O Submission Queues: 127 00:31:20.326 Number of I/O Completion Queues: 127 00:31:20.326 00:31:20.326 Active Namespaces 00:31:20.326 ================= 00:31:20.326 Namespace ID:1 00:31:20.326 Error Recovery Timeout: Unlimited 00:31:20.326 Command Set Identifier: NVM (00h) 00:31:20.326 Deallocate: Supported 00:31:20.326 Deallocated/Unwritten Error: Not Supported 00:31:20.326 Deallocated Read Value: Unknown 00:31:20.326 Deallocate in Write Zeroes: Not Supported 00:31:20.326 Deallocated Guard Field: 0xFFFF 00:31:20.326 Flush: Supported 00:31:20.326 Reservation: Supported 00:31:20.326 Namespace Sharing Capabilities: Multiple Controllers 00:31:20.326 Size (in LBAs): 131072 (0GiB) 00:31:20.326 Capacity (in LBAs): 131072 (0GiB) 00:31:20.326 Utilization (in LBAs): 131072 (0GiB) 00:31:20.326 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:20.326 EUI64: ABCDEF0123456789 00:31:20.326 UUID: 5e0096a1-6c60-4b0b-b179-a34911c83819 00:31:20.326 Thin Provisioning: Not Supported 00:31:20.326 Per-NS Atomic Units: Yes 00:31:20.326 Atomic Boundary Size (Normal): 0 00:31:20.326 Atomic Boundary Size (PFail): 0 00:31:20.326 Atomic Boundary Offset: 0 00:31:20.326 Maximum Single Source Range Length: 65535 00:31:20.327 Maximum Copy Length: 65535 00:31:20.327 Maximum Source Range Count: 1 00:31:20.327 NGUID/EUI64 Never Reused: No 00:31:20.327 Namespace Write Protected: No 00:31:20.327 Number of LBA Formats: 1 00:31:20.327 Current LBA Format: LBA Format #00 00:31:20.327 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:20.327 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:20.327 rmmod nvme_tcp 00:31:20.327 rmmod nvme_fabrics 00:31:20.327 rmmod nvme_keyring 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 21900 ']' 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 21900 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 21900 ']' 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 21900 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:20.327 08:27:29 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 21900 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 21900' 00:31:20.327 killing process with pid 21900 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 21900 00:31:20.327 08:27:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 21900 00:31:20.587 08:27:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:20.587 08:27:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:20.587 08:27:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:20.587 08:27:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:20.587 08:27:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:20.587 08:27:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.587 08:27:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.587 08:27:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.120 08:27:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:23.120 00:31:23.120 real 0m5.331s 00:31:23.120 user 0m4.172s 00:31:23.120 sys 0m1.871s 00:31:23.120 08:27:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:23.120 08:27:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:23.120 ************************************ 00:31:23.120 END TEST nvmf_identify 00:31:23.120 ************************************ 00:31:23.120 08:27:32 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:23.120 08:27:32 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:23.120 08:27:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:23.120 08:27:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:23.120 08:27:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:23.120 ************************************ 00:31:23.120 START TEST nvmf_perf 00:31:23.120 ************************************ 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:23.120 * Looking for test storage... 00:31:23.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:23.120 
08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.120 08:27:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:23.121 08:27:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.121 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:23.121 08:27:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:23.121 08:27:32 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@285 -- # xtrace_disable 00:31:23.121 08:27:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:25.024 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:25.024 08:27:34 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:25.024 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:25.024 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:25.024 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.024 08:27:34 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.024 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:25.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:31:25.025 00:31:25.025 --- 10.0.0.2 ping statistics --- 00:31:25.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.025 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:25.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:31:25.025 00:31:25.025 --- 10.0.0.1 ping statistics --- 00:31:25.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.025 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=23981 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 23981 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 23981 ']' 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:25.025 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:25.025 [2024-07-21 08:27:34.376374] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:31:25.025 [2024-07-21 08:27:34.376441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.025 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.025 [2024-07-21 08:27:34.441830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:25.025 [2024-07-21 08:27:34.542275] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.025 [2024-07-21 08:27:34.542327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.025 [2024-07-21 08:27:34.542356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.025 [2024-07-21 08:27:34.542368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.025 [2024-07-21 08:27:34.542378] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:25.025 [2024-07-21 08:27:34.542462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.025 [2024-07-21 08:27:34.542485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.025 [2024-07-21 08:27:34.542545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:25.025 [2024-07-21 08:27:34.542547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.283 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:25.283 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:31:25.283 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:25.283 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:25.283 08:27:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:25.283 08:27:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.283 08:27:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:25.283 08:27:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:28.595 08:27:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:28.595 08:27:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:28.595 08:27:38 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:31:28.595 08:27:38 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:28.852 08:27:38 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:28.852 08:27:38 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:88:00.0 ']' 00:31:28.853 08:27:38 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:28.853 08:27:38 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:28.853 08:27:38 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:29.110 [2024-07-21 08:27:38.617700] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.110 08:27:38 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:29.367 08:27:38 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:29.367 08:27:38 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:29.624 08:27:39 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:29.624 08:27:39 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:29.881 08:27:39 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.138 [2024-07-21 08:27:39.605311] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.138 08:27:39 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:30.395 08:27:39 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:31:30.395 08:27:39 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 
00:31:30.395 08:27:39 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:30.395 08:27:39 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:31:31.770 Initializing NVMe Controllers 00:31:31.770 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:31:31.770 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:31:31.770 Initialization complete. Launching workers. 00:31:31.770 ======================================================== 00:31:31.770 Latency(us) 00:31:31.770 Device Information : IOPS MiB/s Average min max 00:31:31.770 PCIE (0000:88:00.0) NSID 1 from core 0: 85349.38 333.40 374.55 32.50 4337.68 00:31:31.770 ======================================================== 00:31:31.770 Total : 85349.38 333.40 374.55 32.50 4337.68 00:31:31.770 00:31:31.770 08:27:41 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:31.770 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.154 Initializing NVMe Controllers 00:31:33.154 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:33.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:33.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:33.154 Initialization complete. Launching workers. 
00:31:33.154 ======================================================== 00:31:33.154 Latency(us) 00:31:33.154 Device Information : IOPS MiB/s Average min max 00:31:33.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 86.00 0.34 12062.93 156.69 45929.66 00:31:33.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19715.37 6998.45 48851.56 00:31:33.154 ======================================================== 00:31:33.154 Total : 137.00 0.54 14911.65 156.69 48851.56 00:31:33.154 00:31:33.154 08:27:42 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:33.154 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.529 Initializing NVMe Controllers 00:31:34.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:34.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:34.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:34.529 Initialization complete. Launching workers. 
00:31:34.529 ======================================================== 00:31:34.529 Latency(us) 00:31:34.529 Device Information : IOPS MiB/s Average min max 00:31:34.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8446.12 32.99 3789.37 607.03 8098.97 00:31:34.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3840.05 15.00 8378.14 5073.18 47692.20 00:31:34.529 ======================================================== 00:31:34.529 Total : 12286.17 47.99 5223.60 607.03 47692.20 00:31:34.529 00:31:34.529 08:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:34.529 08:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:34.529 08:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:34.529 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.058 Initializing NVMe Controllers 00:31:37.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.058 Controller IO queue size 128, less than required. 00:31:37.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.058 Controller IO queue size 128, less than required. 00:31:37.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:37.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:37.058 Initialization complete. Launching workers. 
00:31:37.058 ======================================================== 00:31:37.058 Latency(us) 00:31:37.058 Device Information : IOPS MiB/s Average min max 00:31:37.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1672.98 418.25 78010.14 56227.47 109487.61 00:31:37.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 578.15 144.54 226976.14 100714.80 342319.47 00:31:37.058 ======================================================== 00:31:37.058 Total : 2251.13 562.78 116268.45 56227.47 342319.47 00:31:37.058 00:31:37.058 08:27:46 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:37.058 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.058 No valid NVMe controllers or AIO or URING devices found 00:31:37.058 Initializing NVMe Controllers 00:31:37.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.058 Controller IO queue size 128, less than required. 00:31:37.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.058 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:37.058 Controller IO queue size 128, less than required. 00:31:37.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.058 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:31:37.058 WARNING: Some requested NVMe devices were skipped 00:31:37.058 08:27:46 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:37.058 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.586 Initializing NVMe Controllers 00:31:39.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:39.586 Controller IO queue size 128, less than required. 00:31:39.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:39.586 Controller IO queue size 128, less than required. 00:31:39.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:39.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:39.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:39.586 Initialization complete. Launching workers. 
00:31:39.586 00:31:39.586 ==================== 00:31:39.586 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:39.586 TCP transport: 00:31:39.586 polls: 8951 00:31:39.586 idle_polls: 5520 00:31:39.586 sock_completions: 3431 00:31:39.586 nvme_completions: 5843 00:31:39.586 submitted_requests: 8732 00:31:39.586 queued_requests: 1 00:31:39.586 00:31:39.586 ==================== 00:31:39.586 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:39.586 TCP transport: 00:31:39.586 polls: 11756 00:31:39.586 idle_polls: 8231 00:31:39.586 sock_completions: 3525 00:31:39.586 nvme_completions: 6207 00:31:39.586 submitted_requests: 9274 00:31:39.586 queued_requests: 1 00:31:39.586 ======================================================== 00:31:39.586 Latency(us) 00:31:39.586 Device Information : IOPS MiB/s Average min max 00:31:39.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1459.64 364.91 90114.22 55198.28 146132.81 00:31:39.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1550.58 387.65 83075.36 46237.80 127828.64 00:31:39.587 ======================================================== 00:31:39.587 Total : 3010.22 752.55 86488.46 46237.80 146132.81 00:31:39.587 00:31:39.587 08:27:49 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:39.587 08:27:49 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:39.844 08:27:49 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:39.844 08:27:49 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:31:39.844 08:27:49 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:43.129 08:27:52 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=96996b96-64af-4c44-b78d-f5299392b74f 00:31:43.129 08:27:52 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 96996b96-64af-4c44-b78d-f5299392b74f 00:31:43.129 08:27:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=96996b96-64af-4c44-b78d-f5299392b74f 00:31:43.129 08:27:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:43.129 08:27:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:43.129 08:27:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:43.129 08:27:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:43.385 08:27:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:43.385 { 00:31:43.385 "uuid": "96996b96-64af-4c44-b78d-f5299392b74f", 00:31:43.385 "name": "lvs_0", 00:31:43.385 "base_bdev": "Nvme0n1", 00:31:43.385 "total_data_clusters": 238234, 00:31:43.385 "free_clusters": 238234, 00:31:43.385 "block_size": 512, 00:31:43.385 "cluster_size": 4194304 00:31:43.385 } 00:31:43.385 ]' 00:31:43.385 08:27:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="96996b96-64af-4c44-b78d-f5299392b74f") .free_clusters' 00:31:43.641 08:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:31:43.641 08:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="96996b96-64af-4c44-b78d-f5299392b74f") .cluster_size' 00:31:43.641 08:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:43.641 08:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:31:43.641 08:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:31:43.641 952936 00:31:43.641 08:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:43.641 08:27:53 nvmf_tcp.nvmf_perf -- 
host/perf.sh@78 -- # free_mb=20480 00:31:43.641 08:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 96996b96-64af-4c44-b78d-f5299392b74f lbd_0 20480 00:31:43.898 08:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=05f212e4-aebc-4cab-b9eb-38f74fc6ffc6 00:31:43.898 08:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 05f212e4-aebc-4cab-b9eb-38f74fc6ffc6 lvs_n_0 00:31:44.832 08:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=ed3145f5-518d-4221-872a-0b5ff79d41b0 00:31:44.832 08:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb ed3145f5-518d-4221-872a-0b5ff79d41b0 00:31:44.832 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=ed3145f5-518d-4221-872a-0b5ff79d41b0 00:31:44.832 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:44.833 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:44.833 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:44.833 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:45.090 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:45.090 { 00:31:45.090 "uuid": "96996b96-64af-4c44-b78d-f5299392b74f", 00:31:45.090 "name": "lvs_0", 00:31:45.090 "base_bdev": "Nvme0n1", 00:31:45.090 "total_data_clusters": 238234, 00:31:45.090 "free_clusters": 233114, 00:31:45.090 "block_size": 512, 00:31:45.090 "cluster_size": 4194304 00:31:45.090 }, 00:31:45.090 { 00:31:45.090 "uuid": "ed3145f5-518d-4221-872a-0b5ff79d41b0", 00:31:45.090 "name": "lvs_n_0", 00:31:45.090 "base_bdev": "05f212e4-aebc-4cab-b9eb-38f74fc6ffc6", 00:31:45.090 "total_data_clusters": 5114, 00:31:45.090 "free_clusters": 
5114, 00:31:45.090 "block_size": 512, 00:31:45.090 "cluster_size": 4194304 00:31:45.090 } 00:31:45.090 ]' 00:31:45.090 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="ed3145f5-518d-4221-872a-0b5ff79d41b0") .free_clusters' 00:31:45.090 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:31:45.090 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="ed3145f5-518d-4221-872a-0b5ff79d41b0") .cluster_size' 00:31:45.090 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:45.090 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:31:45.090 08:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:31:45.090 20456 00:31:45.091 08:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:45.091 08:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ed3145f5-518d-4221-872a-0b5ff79d41b0 lbd_nest_0 20456 00:31:45.348 08:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=6c188851-c0ab-45e1-8fe3-cdf36d059112 00:31:45.348 08:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:45.604 08:27:55 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:45.604 08:27:55 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6c188851-c0ab-45e1-8fe3-cdf36d059112 00:31:45.863 08:27:55 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.173 08:27:55 nvmf_tcp.nvmf_perf -- 
host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:46.173 08:27:55 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:46.173 08:27:55 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:46.173 08:27:55 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:46.173 08:27:55 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:46.173 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.385 Initializing NVMe Controllers 00:31:58.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:58.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:58.385 Initialization complete. Launching workers. 00:31:58.385 ======================================================== 00:31:58.385 Latency(us) 00:31:58.385 Device Information : IOPS MiB/s Average min max 00:31:58.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.69 0.02 21437.68 183.45 48719.85 00:31:58.385 ======================================================== 00:31:58.385 Total : 46.69 0.02 21437.68 183.45 48719.85 00:31:58.385 00:31:58.385 08:28:05 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:58.385 08:28:05 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:58.385 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.374 Initializing NVMe Controllers 00:32:08.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:08.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:08.374 Initialization complete. 
Launching workers. 00:32:08.374 ======================================================== 00:32:08.374 Latency(us) 00:32:08.374 Device Information : IOPS MiB/s Average min max 00:32:08.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.00 10.37 12048.21 4027.81 50881.43 00:32:08.374 ======================================================== 00:32:08.374 Total : 83.00 10.37 12048.21 4027.81 50881.43 00:32:08.374 00:32:08.374 08:28:16 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:08.374 08:28:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:08.374 08:28:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:08.374 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.351 Initializing NVMe Controllers 00:32:18.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:18.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:18.351 Initialization complete. Launching workers. 
00:32:18.351 ======================================================== 00:32:18.351 Latency(us) 00:32:18.351 Device Information : IOPS MiB/s Average min max 00:32:18.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7336.77 3.58 4369.25 312.05 42957.14 00:32:18.351 ======================================================== 00:32:18.351 Total : 7336.77 3.58 4369.25 312.05 42957.14 00:32:18.351 00:32:18.352 08:28:26 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:18.352 08:28:26 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:18.352 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.343 Initializing NVMe Controllers 00:32:28.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:28.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:28.343 Initialization complete. Launching workers. 
00:32:28.343 ======================================================== 00:32:28.343 Latency(us) 00:32:28.343 Device Information : IOPS MiB/s Average min max 00:32:28.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3636.60 454.57 8803.81 717.03 19400.45 00:32:28.343 ======================================================== 00:32:28.343 Total : 3636.60 454.57 8803.81 717.03 19400.45 00:32:28.343 00:32:28.343 08:28:36 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:28.343 08:28:36 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:28.343 08:28:36 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:28.343 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.338 Initializing NVMe Controllers 00:32:38.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:38.338 Controller IO queue size 128, less than required. 00:32:38.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:38.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:38.338 Initialization complete. Launching workers. 
00:32:38.338 ======================================================== 00:32:38.338 Latency(us) 00:32:38.338 Device Information : IOPS MiB/s Average min max 00:32:38.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11887.00 5.80 10776.14 1727.12 23541.25 00:32:38.338 ======================================================== 00:32:38.338 Total : 11887.00 5.80 10776.14 1727.12 23541.25 00:32:38.338 00:32:38.338 08:28:47 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:38.338 08:28:47 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:38.338 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.315 Initializing NVMe Controllers 00:32:48.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:48.315 Controller IO queue size 128, less than required. 00:32:48.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:48.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:48.315 Initialization complete. Launching workers. 
00:32:48.315 ======================================================== 00:32:48.315 Latency(us) 00:32:48.316 Device Information : IOPS MiB/s Average min max 00:32:48.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1190.20 148.77 108123.26 23438.39 219336.94 00:32:48.316 ======================================================== 00:32:48.316 Total : 1190.20 148.77 108123.26 23438.39 219336.94 00:32:48.316 00:32:48.316 08:28:57 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:48.316 08:28:57 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6c188851-c0ab-45e1-8fe3-cdf36d059112 00:32:49.250 08:28:58 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:49.250 08:28:58 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 05f212e4-aebc-4cab-b9eb-38f74fc6ffc6 00:32:49.507 08:28:59 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:49.765 rmmod 
nvme_tcp 00:32:49.765 rmmod nvme_fabrics 00:32:49.765 rmmod nvme_keyring 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 23981 ']' 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 23981 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 23981 ']' 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 23981 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 23981 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 23981' 00:32:49.765 killing process with pid 23981 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 23981 00:32:49.765 08:28:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 23981 00:32:51.667 08:29:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:51.667 08:29:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:51.667 08:29:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:51.667 08:29:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:51.667 08:29:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:51.667 
08:29:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.667 08:29:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:51.667 08:29:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.573 08:29:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:53.573 00:32:53.573 real 1m30.792s 00:32:53.573 user 5m32.939s 00:32:53.573 sys 0m16.542s 00:32:53.573 08:29:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:53.573 08:29:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:53.573 ************************************ 00:32:53.573 END TEST nvmf_perf 00:32:53.573 ************************************ 00:32:53.573 08:29:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:53.573 08:29:03 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:53.573 08:29:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:53.573 08:29:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:53.573 08:29:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:53.573 ************************************ 00:32:53.573 START TEST nvmf_fio_host 00:32:53.573 ************************************ 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:53.573 * Looking for test storage... 
00:32:53.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.573 08:29:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:53.574 
08:29:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:53.574 08:29:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:55.477 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:55.478 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:55.478 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:55.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:55.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:55.478 
08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:55.478 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:55.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:55.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:32:55.736 00:32:55.736 --- 10.0.0.2 ping statistics --- 00:32:55.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.736 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:55.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:55.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:32:55.736 00:32:55.736 --- 10.0.0.1 ping statistics --- 00:32:55.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.736 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:55.736 08:29:05 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=35922 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 35922 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 35922 ']' 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:55.736 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.736 [2024-07-21 08:29:05.265112] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:32:55.736 [2024-07-21 08:29:05.265206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.736 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.736 [2024-07-21 08:29:05.336062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:55.994 [2024-07-21 08:29:05.428531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.994 [2024-07-21 08:29:05.428583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.994 [2024-07-21 08:29:05.428609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.994 [2024-07-21 08:29:05.428636] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.994 [2024-07-21 08:29:05.428648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:55.994 [2024-07-21 08:29:05.428722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.994 [2024-07-21 08:29:05.428780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:55.994 [2024-07-21 08:29:05.428844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:55.994 [2024-07-21 08:29:05.428846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.994 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:55.994 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:32:55.994 08:29:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:56.252 [2024-07-21 08:29:05.774957] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.252 08:29:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:56.252 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:56.253 08:29:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.253 08:29:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:56.510 Malloc1 00:32:56.511 08:29:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:56.768 08:29:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:57.025 08:29:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.281 
[2024-07-21 08:29:06.796857] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.281 08:29:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:57.538 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:57.539 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:57.539 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:57.539 08:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:57.795 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:57.795 fio-3.35 00:32:57.795 Starting 1 thread 00:32:57.795 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.354 00:33:00.354 test: (groupid=0, jobs=1): err= 0: pid=36311: Sun Jul 21 08:29:09 2024 00:33:00.354 read: IOPS=8786, BW=34.3MiB/s (36.0MB/s)(68.9MiB/2007msec) 00:33:00.354 slat (nsec): 
min=1969, max=112083, avg=2622.66, stdev=1509.60 00:33:00.354 clat (usec): min=2145, max=13684, avg=7974.91, stdev=617.40 00:33:00.354 lat (usec): min=2168, max=13687, avg=7977.53, stdev=617.32 00:33:00.354 clat percentiles (usec): 00:33:00.354 | 1.00th=[ 6521], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7504], 00:33:00.354 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8160], 00:33:00.354 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:33:00.354 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[10945], 99.95th=[12387], 00:33:00.354 | 99.99th=[13566] 00:33:00.354 bw ( KiB/s): min=33840, max=35968, per=99.97%, avg=35136.00, stdev=909.14, samples=4 00:33:00.354 iops : min= 8460, max= 8992, avg=8784.00, stdev=227.29, samples=4 00:33:00.354 write: IOPS=8792, BW=34.3MiB/s (36.0MB/s)(68.9MiB/2007msec); 0 zone resets 00:33:00.354 slat (nsec): min=2070, max=95520, avg=2706.69, stdev=1245.34 00:33:00.354 clat (usec): min=1547, max=13428, avg=6495.11, stdev=541.88 00:33:00.354 lat (usec): min=1553, max=13431, avg=6497.82, stdev=541.87 00:33:00.354 clat percentiles (usec): 00:33:00.354 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6063], 00:33:00.354 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6587], 00:33:00.354 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:33:00.354 | 99.00th=[ 7635], 99.50th=[ 7832], 99.90th=[10945], 99.95th=[11731], 00:33:00.354 | 99.99th=[12649] 00:33:00.354 bw ( KiB/s): min=34880, max=35608, per=100.00%, avg=35170.00, stdev=333.22, samples=4 00:33:00.354 iops : min= 8720, max= 8902, avg=8792.50, stdev=83.30, samples=4 00:33:00.354 lat (msec) : 2=0.02%, 4=0.11%, 10=99.71%, 20=0.16% 00:33:00.354 cpu : usr=57.98%, sys=39.23%, ctx=79, majf=0, minf=34 00:33:00.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:00.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.354 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:00.354 issued rwts: total=17634,17646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.354 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:00.354 00:33:00.354 Run status group 0 (all jobs): 00:33:00.354 READ: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=68.9MiB (72.2MB), run=2007-2007msec 00:33:00.354 WRITE: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=68.9MiB (72.3MB), run=2007-2007msec 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:00.355 08:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:00.355 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:00.355 fio-3.35 00:33:00.355 Starting 1 thread 00:33:00.355 EAL: No free 2048 kB hugepages reported on node 1 00:33:02.883 00:33:02.883 test: (groupid=0, jobs=1): err= 0: pid=36647: Sun Jul 21 08:29:12 2024 00:33:02.883 read: IOPS=8228, BW=129MiB/s (135MB/s)(258MiB/2008msec) 00:33:02.883 slat (usec): min=2, 
max=109, avg= 3.78, stdev= 1.93 00:33:02.883 clat (usec): min=2465, max=16238, avg=8932.76, stdev=2033.52 00:33:02.883 lat (usec): min=2469, max=16242, avg=8936.54, stdev=2033.53 00:33:02.883 clat percentiles (usec): 00:33:02.883 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7177], 00:33:02.883 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:33:02.883 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11600], 95.00th=[12256], 00:33:02.883 | 99.00th=[14484], 99.50th=[15139], 99.90th=[15926], 99.95th=[15926], 00:33:02.883 | 99.99th=[16188] 00:33:02.883 bw ( KiB/s): min=57664, max=76000, per=51.11%, avg=67280.00, stdev=8780.08, samples=4 00:33:02.883 iops : min= 3604, max= 4750, avg=4205.00, stdev=548.75, samples=4 00:33:02.883 write: IOPS=4839, BW=75.6MiB/s (79.3MB/s)(138MiB/1828msec); 0 zone resets 00:33:02.883 slat (usec): min=30, max=192, avg=34.67, stdev= 6.27 00:33:02.883 clat (usec): min=5535, max=18928, avg=11638.63, stdev=2027.31 00:33:02.883 lat (usec): min=5568, max=18960, avg=11673.30, stdev=2027.35 00:33:02.883 clat percentiles (usec): 00:33:02.883 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:33:02.883 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11469], 60.00th=[12125], 00:33:02.883 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14484], 95.00th=[15139], 00:33:02.883 | 99.00th=[16581], 99.50th=[16909], 99.90th=[18482], 99.95th=[18744], 00:33:02.883 | 99.99th=[19006] 00:33:02.883 bw ( KiB/s): min=60992, max=79200, per=90.71%, avg=70232.00, stdev=8839.78, samples=4 00:33:02.883 iops : min= 3812, max= 4950, avg=4389.50, stdev=552.49, samples=4 00:33:02.883 lat (msec) : 4=0.13%, 10=55.23%, 20=44.63% 00:33:02.883 cpu : usr=75.24%, sys=22.57%, ctx=40, majf=0, minf=60 00:33:02.883 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:33:02.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:33:02.883 issued rwts: total=16522,8846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.883 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.883 00:33:02.883 Run status group 0 (all jobs): 00:33:02.883 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=258MiB (271MB), run=2008-2008msec 00:33:02.883 WRITE: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=138MiB (145MB), run=1828-1828msec 00:33:02.883 08:29:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:03.141 08:29:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:03.141 08:29:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:03.141 08:29:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:03.141 08:29:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:33:03.141 08:29:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:33:03.141 08:29:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:03.141 08:29:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:03.141 08:29:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:03.141 08:29:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:03.141 08:29:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:33:03.141 08:29:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:33:06.418 Nvme0n1 00:33:06.418 08:29:15 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=cead5cfc-e65d-4060-8105-12b1957ffd9a 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb cead5cfc-e65d-4060-8105-12b1957ffd9a 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=cead5cfc-e65d-4060-8105-12b1957ffd9a 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:09.698 { 00:33:09.698 "uuid": "cead5cfc-e65d-4060-8105-12b1957ffd9a", 00:33:09.698 "name": "lvs_0", 00:33:09.698 "base_bdev": "Nvme0n1", 00:33:09.698 "total_data_clusters": 930, 00:33:09.698 "free_clusters": 930, 00:33:09.698 "block_size": 512, 00:33:09.698 "cluster_size": 1073741824 00:33:09.698 } 00:33:09.698 ]' 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cead5cfc-e65d-4060-8105-12b1957ffd9a") .free_clusters' 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cead5cfc-e65d-4060-8105-12b1957ffd9a") .cluster_size' 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1373 -- # free_mb=952320 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:33:09.698 952320 00:33:09.698 08:29:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:33:09.955 e4586d61-803b-4da2-a80a-385de30d9f6d 00:33:09.955 08:29:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:10.211 08:29:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:10.468 08:29:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:10.724 08:29:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:10.981 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:10.981 fio-3.35 00:33:10.981 Starting 1 thread 00:33:10.981 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.504 00:33:13.504 test: (groupid=0, jobs=1): err= 0: pid=37930: Sun Jul 21 08:29:22 2024 00:33:13.504 read: IOPS=5830, BW=22.8MiB/s (23.9MB/s)(45.7MiB/2008msec) 00:33:13.504 slat (nsec): min=1974, max=174231, avg=2766.60, stdev=2696.16 00:33:13.504 clat (usec): min=1089, max=171588, avg=12057.24, stdev=11798.19 00:33:13.504 lat (usec): min=1092, max=171631, avg=12060.00, stdev=11798.54 00:33:13.504 clat percentiles (msec): 00:33:13.504 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:33:13.504 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:33:13.504 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:33:13.504 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:33:13.504 | 99.99th=[ 171] 00:33:13.504 bw ( KiB/s): min=16120, max=25776, per=99.82%, avg=23278.00, stdev=4772.70, samples=4 00:33:13.505 iops : min= 4030, max= 6444, avg=5819.50, stdev=1193.17, samples=4 00:33:13.505 write: IOPS=5812, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2008msec); 0 zone resets 00:33:13.505 slat (usec): min=2, max=147, avg= 2.90, stdev= 2.21 00:33:13.505 clat (usec): min=305, max=169466, avg=9728.02, stdev=11082.56 00:33:13.505 lat (usec): min=308, max=169473, avg=9730.92, stdev=11082.94 00:33:13.505 clat percentiles (msec): 00:33:13.505 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:33:13.505 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:33:13.505 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:33:13.505 | 99.00th=[ 12], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 00:33:13.505 | 99.99th=[ 169] 00:33:13.505 bw ( KiB/s): min=17192, max=25280, per=99.89%, avg=23226.00, stdev=4022.78, samples=4 00:33:13.505 
iops : min= 4298, max= 6320, avg=5806.50, stdev=1005.69, samples=4 00:33:13.505 lat (usec) : 500=0.01%, 750=0.01% 00:33:13.505 lat (msec) : 2=0.03%, 4=0.12%, 10=50.33%, 20=48.96%, 250=0.55% 00:33:13.505 cpu : usr=54.76%, sys=43.00%, ctx=131, majf=0, minf=34 00:33:13.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:33:13.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:13.505 issued rwts: total=11707,11672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:13.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:13.505 00:33:13.505 Run status group 0 (all jobs): 00:33:13.505 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.7MiB (48.0MB), run=2008-2008msec 00:33:13.505 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.8MB), run=2008-2008msec 00:33:13.505 08:29:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:13.505 08:29:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=52b913f0-7d86-4c8c-841d-5d0c540eda9d 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 52b913f0-7d86-4c8c-841d-5d0c540eda9d 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=52b913f0-7d86-4c8c-841d-5d0c540eda9d 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1367 -- # local cs 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:14.878 { 00:33:14.878 "uuid": "cead5cfc-e65d-4060-8105-12b1957ffd9a", 00:33:14.878 "name": "lvs_0", 00:33:14.878 "base_bdev": "Nvme0n1", 00:33:14.878 "total_data_clusters": 930, 00:33:14.878 "free_clusters": 0, 00:33:14.878 "block_size": 512, 00:33:14.878 "cluster_size": 1073741824 00:33:14.878 }, 00:33:14.878 { 00:33:14.878 "uuid": "52b913f0-7d86-4c8c-841d-5d0c540eda9d", 00:33:14.878 "name": "lvs_n_0", 00:33:14.878 "base_bdev": "e4586d61-803b-4da2-a80a-385de30d9f6d", 00:33:14.878 "total_data_clusters": 237847, 00:33:14.878 "free_clusters": 237847, 00:33:14.878 "block_size": 512, 00:33:14.878 "cluster_size": 4194304 00:33:14.878 } 00:33:14.878 ]' 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="52b913f0-7d86-4c8c-841d-5d0c540eda9d") .free_clusters' 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="52b913f0-7d86-4c8c-841d-5d0c540eda9d") .cluster_size' 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:33:14.878 951388 00:33:14.878 08:29:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:33:15.818 a8667abe-7058-4800-b4a1-a9d9c47e26ee 00:33:15.818 08:29:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:15.818 08:29:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:16.075 08:29:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:16.332 08:29:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:16.332 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:16.332 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:16.332 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:16.332 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:16.332 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:16.332 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:16.332 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:16.332 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:16.333 08:29:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:16.592 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:16.592 fio-3.35 00:33:16.592 Starting 1 thread 00:33:16.592 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.116 00:33:19.116 test: (groupid=0, jobs=1): err= 0: pid=38659: Sun Jul 21 08:29:28 2024 00:33:19.116 read: IOPS=5902, BW=23.1MiB/s 
(24.2MB/s)(46.3MiB/2010msec) 00:33:19.117 slat (usec): min=2, max=173, avg= 2.70, stdev= 2.43 00:33:19.117 clat (usec): min=4474, max=20596, avg=11928.20, stdev=1040.46 00:33:19.117 lat (usec): min=4480, max=20599, avg=11930.89, stdev=1040.29 00:33:19.117 clat percentiles (usec): 00:33:19.117 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:33:19.117 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:33:19.117 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:33:19.117 | 99.00th=[14222], 99.50th=[14484], 99.90th=[18482], 99.95th=[18744], 00:33:19.117 | 99.99th=[19792] 00:33:19.117 bw ( KiB/s): min=22592, max=24072, per=99.97%, avg=23602.00, stdev=679.93, samples=4 00:33:19.117 iops : min= 5648, max= 6018, avg=5900.50, stdev=169.98, samples=4 00:33:19.117 write: IOPS=5900, BW=23.0MiB/s (24.2MB/s)(46.3MiB/2010msec); 0 zone resets 00:33:19.117 slat (usec): min=2, max=146, avg= 2.82, stdev= 1.78 00:33:19.117 clat (usec): min=2217, max=18755, avg=9641.77, stdev=900.12 00:33:19.117 lat (usec): min=2225, max=18757, avg=9644.59, stdev=900.04 00:33:19.117 clat percentiles (usec): 00:33:19.117 | 1.00th=[ 7570], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:33:19.117 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:33:19.117 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:33:19.117 | 99.00th=[11600], 99.50th=[11994], 99.90th=[15795], 99.95th=[18482], 00:33:19.117 | 99.99th=[18744] 00:33:19.117 bw ( KiB/s): min=23456, max=23680, per=99.95%, avg=23590.00, stdev=94.97, samples=4 00:33:19.117 iops : min= 5864, max= 5920, avg=5897.50, stdev=23.74, samples=4 00:33:19.117 lat (msec) : 4=0.05%, 10=35.34%, 20=64.61%, 50=0.01% 00:33:19.117 cpu : usr=62.12%, sys=35.64%, ctx=68, majf=0, minf=34 00:33:19.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:33:19.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:19.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:19.117 issued rwts: total=11864,11860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:19.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:19.117 00:33:19.117 Run status group 0 (all jobs): 00:33:19.117 READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.6MB), run=2010-2010msec 00:33:19.117 WRITE: bw=23.0MiB/s (24.2MB/s), 23.0MiB/s-23.0MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.6MB), run=2010-2010msec 00:33:19.117 08:29:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:19.117 08:29:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:19.117 08:29:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:23.328 08:29:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:23.328 08:29:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:26.606 08:29:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:26.606 08:29:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 
00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:28.511 rmmod nvme_tcp 00:33:28.511 rmmod nvme_fabrics 00:33:28.511 rmmod nvme_keyring 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 35922 ']' 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 35922 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 35922 ']' 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 35922 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 35922 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 35922' 00:33:28.511 killing process with pid 35922 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 35922 00:33:28.511 08:29:37 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@972 -- # wait 35922 00:33:28.770 08:29:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:28.770 08:29:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:28.770 08:29:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:28.770 08:29:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:28.770 08:29:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:28.770 08:29:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.770 08:29:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:28.770 08:29:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.679 08:29:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:30.679 00:33:30.679 real 0m37.105s 00:33:30.679 user 2m22.429s 00:33:30.679 sys 0m7.075s 00:33:30.679 08:29:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:30.679 08:29:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.679 ************************************ 00:33:30.679 END TEST nvmf_fio_host 00:33:30.679 ************************************ 00:33:30.679 08:29:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:30.679 08:29:40 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:30.679 08:29:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:30.679 08:29:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:30.679 08:29:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:30.679 ************************************ 00:33:30.679 START TEST nvmf_failover 00:33:30.679 ************************************ 
00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:30.679 * Looking for test storage... 00:33:30.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:30.679 08:29:40 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:33:30.679 08:29:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:33:32.577 08:29:42 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:32.577 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:32.577 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:32.577 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:32.577 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:32.577 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:32.578 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:32.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:32.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:33:32.835 00:33:32.835 --- 10.0.0.2 ping statistics --- 00:33:32.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.835 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:32.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:32.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:33:32.835 00:33:32.835 --- 10.0.0.1 ping statistics --- 00:33:32.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.835 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=41937 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 41937 00:33:32.835 08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 41937 ']' 00:33:32.835 
08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.836 08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:32.836 08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.836 08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:32.836 08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:32.836 [2024-07-21 08:29:42.388357] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:33:32.836 [2024-07-21 08:29:42.388448] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.836 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.836 [2024-07-21 08:29:42.454660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:33.093 [2024-07-21 08:29:42.540660] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.093 [2024-07-21 08:29:42.540713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.093 [2024-07-21 08:29:42.540736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.093 [2024-07-21 08:29:42.540746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.093 [2024-07-21 08:29:42.540756] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:33.093 [2024-07-21 08:29:42.542632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:33.093 [2024-07-21 08:29:42.542707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:33.093 [2024-07-21 08:29:42.542711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.093 08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:33.093 08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:33.093 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:33.093 08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:33.093 08:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:33.093 08:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.093 08:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:33.352 [2024-07-21 08:29:42.890161] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.352 08:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:33.609 Malloc0 00:33:33.609 08:29:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:33.866 08:29:43 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:34.124 08:29:43 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:34.381 [2024-07-21 08:29:43.907136] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.381 08:29:43 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:34.638 [2024-07-21 08:29:44.151824] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:34.638 08:29:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:34.896 [2024-07-21 08:29:44.408692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:34.896 08:29:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=42196 00:33:34.896 08:29:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:34.896 08:29:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:34.896 08:29:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 42196 /var/tmp/bdevperf.sock 00:33:34.896 08:29:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 42196 ']' 00:33:34.896 08:29:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:34.896 08:29:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:34.896 08:29:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:34.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:34.896 08:29:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:34.896 08:29:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:35.153 08:29:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:35.153 08:29:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:35.153 08:29:44 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:35.717 NVMe0n1 00:33:35.717 08:29:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:35.717 00:33:35.975 08:29:45 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=42327 00:33:35.975 08:29:45 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:35.975 08:29:45 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:36.911 08:29:46 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:37.169 [2024-07-21 08:29:46.588970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d9140 is same with the state(5) to be set 00:33:37.169 [2024-07-21 08:29:46.589096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d9140 is same with 
the state(5) to be set 00:33:37.169 [2024-07-21 08:29:46.589115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d9140 is same with the state(5) to be set 00:33:37.169 [2024-07-21 08:29:46.589128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d9140 is same with the state(5) to be set 00:33:37.169 [2024-07-21 08:29:46.589141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d9140 is same with the state(5) to be set 00:33:37.169 [2024-07-21 08:29:46.589153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d9140 is same with the state(5) to be set 00:33:37.169 [2024-07-21 08:29:46.589165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d9140 is same with the state(5) to be set 00:33:37.169 [2024-07-21 08:29:46.589178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d9140 is same with the state(5) to be set 00:33:37.170 [2024-07-21 08:29:46.589190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d9140 is same with the state(5) to be set 00:33:37.170 08:29:46 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:40.451 08:29:49 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:40.451 00:33:40.451 08:29:49 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:40.710 08:29:50 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:44.025 08:29:53 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:44.025 [2024-07-21 08:29:53.507528] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:44.025 08:29:53 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:44.960 08:29:54 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:45.217 [2024-07-21 08:29:54.768032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(5) to be set 00:33:45.217 [2024-07-21 08:29:54.768095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(5) to be set 00:33:45.217 [2024-07-21 08:29:54.768110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(5) to be set 00:33:45.217 [2024-07-21 08:29:54.768123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(5) to be set 00:33:45.217 [2024-07-21 08:29:54.768136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(5) to be set 00:33:45.217 [2024-07-21 08:29:54.768148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(5) to be set 00:33:45.217 [2024-07-21 08:29:54.768161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(5) to be set 00:33:45.217 [2024-07-21 08:29:54.768173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(5) to be set 00:33:45.217 [2024-07-21 08:29:54.768185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(5) to be set 
00:33:45.217 08:29:54 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 42327 00:33:51.786 0 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 42196 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 42196 ']' 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 42196 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 42196 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 42196' 00:33:51.786 killing process with pid 42196 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 42196 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 42196 00:33:51.786 08:30:00 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:51.786 [2024-07-21 08:29:44.466295] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:33:51.786 [2024-07-21 08:29:44.466379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42196 ] 00:33:51.786 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.786 [2024-07-21 08:29:44.525326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.786 [2024-07-21 08:29:44.614017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.786 Running I/O for 15 seconds... 00:33:51.786 [2024-07-21 08:29:46.589462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589789] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.589980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 
nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.589993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 
[2024-07-21 08:29:46.590147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.786 [2024-07-21 08:29:46.590383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.786 [2024-07-21 08:29:46.590398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 
[2024-07-21 08:29:46.590648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.787 [2024-07-21 08:29:46.590859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.590979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.590992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 
[2024-07-21 08:29:46.591145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 [2024-07-21 08:29:46.591589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.787 
[2024-07-21 08:29:46.591640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.787 [2024-07-21 08:29:46.591657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.591685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.591718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.591747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.591775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.591803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.591832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.591861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.591889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.591925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.591954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 
lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.591981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.591996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.592009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.592024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.592037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.592052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.592069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.592084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.592098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 [2024-07-21 08:29:46.592113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.788 [2024-07-21 08:29:46.592126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.788 
[2024-07-21 08:29:46.592141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:51.788 [2024-07-21 08:29:46.592154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:51.789 [2024-07-21 08:29:46.592991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:51.789 [2024-07-21 08:29:46.593004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:51.789 [2024-07-21 08:29:46.593250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d06c0 is same with the state(5) to be set
00:33:51.789 [2024-07-21 08:29:46.593267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:51.789 [2024-07-21 08:29:46.593278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:51.789 [2024-07-21 08:29:46.593290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0
00:33:51.789 [2024-07-21 08:29:46.593302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:51.789 [2024-07-21 08:29:46.593361] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21d06c0 was disconnected and freed. reset controller.
00:33:51.789 [2024-07-21 08:29:46.593380] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:33:51.789 [2024-07-21 08:29:46.593413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:51.789 [2024-07-21 08:29:46.593431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:51.789 [2024-07-21 08:29:46.593446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:51.789 [2024-07-21 08:29:46.593459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:51.789 [2024-07-21 08:29:46.593473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:51.789 [2024-07-21 08:29:46.593486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:51.789 [2024-07-21 08:29:46.593499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:51.789 [2024-07-21 08:29:46.593511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:51.789 [2024-07-21 08:29:46.593524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:51.789 [2024-07-21 08:29:46.596792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:51.789 [2024-07-21 08:29:46.596829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219c830 (9): Bad file descriptor
00:33:51.789 [2024-07-21 08:29:46.631726] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:51.789 [2024-07-21 08:29:50.258868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:51.789 [2024-07-21 08:29:50.258954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:51.789 [2024-07-21 08:29:50.259172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:51.789 [2024-07-21 08:29:50.259193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:51.791 [2024-07-21 08:29:50.261175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:90 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.791 [2024-07-21 08:29:50.261437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.791 [2024-07-21 08:29:50.261465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.791 [2024-07-21 08:29:50.261494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:51.791 [2024-07-21 08:29:50.261509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.791 [2024-07-21 08:29:50.261522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.791 [2024-07-21 08:29:50.261550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.261974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.261987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 [2024-07-21 08:29:50.262003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.791 [2024-07-21 08:29:50.262016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.791 
[2024-07-21 08:29:50.262031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.792 [2024-07-21 08:29:50.262275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.792 [2024-07-21 08:29:50.262303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.792 [2024-07-21 08:29:50.262332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 
[2024-07-21 08:29:50.262528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:50.262785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cc300 is same with the state(5) to be set 00:33:51.792 [2024-07-21 08:29:50.262820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:51.792 [2024-07-21 08:29:50.262833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:51.792 [2024-07-21 08:29:50.262844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81104 len:8 PRP1 0x0 PRP2 0x0 00:33:51.792 [2024-07-21 08:29:50.262857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.262923] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21cc300 was disconnected and freed. reset controller. 00:33:51.792 [2024-07-21 08:29:50.262942] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:51.792 [2024-07-21 08:29:50.262975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.792 [2024-07-21 08:29:50.262993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.263008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.792 [2024-07-21 08:29:50.263021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.263035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.792 [2024-07-21 08:29:50.263048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.263061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.792 [2024-07-21 08:29:50.263073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:50.263086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:51.792 [2024-07-21 08:29:50.266350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.792 [2024-07-21 08:29:50.266389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219c830 (9): Bad file descriptor 00:33:51.792 [2024-07-21 08:29:50.343208] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:51.792 [2024-07-21 08:29:54.768739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:54.768782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:54.768812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:54.768828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:54.768844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:54.768858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:54.768874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:54.768888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:54.768903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15952 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:54.768922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:54.768938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:54.768967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:54.768982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:54.768999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:54.769013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.792 [2024-07-21 08:29:54.769027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.792 [2024-07-21 08:29:54.769041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769096] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.793 [2024-07-21 08:29:54.769913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:51.793 [2024-07-21 08:29:54.769960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.769975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.769994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.770021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.770049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.770079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.770107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.770135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.770163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.770190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.770217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.770244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.770271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.793 [2024-07-21 08:29:54.770299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.793 [2024-07-21 08:29:54.770313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 
[2024-07-21 08:29:54.770440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770591] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.770974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.770988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 
08:29:54.771289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.794 [2024-07-21 08:29:54.771402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.794 [2024-07-21 08:29:54.771416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771444] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.771971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.771985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.772014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.772042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.772071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.772099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 
[2024-07-21 08:29:54.772127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.772156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.772184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.772212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.772240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.772269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772290] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.795 [2024-07-21 08:29:54.772304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:51.795 [2024-07-21 08:29:54.772364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16888 len:8 PRP1 0x0 PRP2 0x0 00:33:51.795 [2024-07-21 08:29:54.772377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:51.795 [2024-07-21 08:29:54.772411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:51.795 [2024-07-21 08:29:54.772422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:8 PRP1 0x0 PRP2 0x0 00:33:51.795 [2024-07-21 08:29:54.772434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:51.795 [2024-07-21 08:29:54.772458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:51.795 [2024-07-21 08:29:54.772469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16904 len:8 PRP1 0x0 PRP2 0x0 00:33:51.795 [2024-07-21 08:29:54.772481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:33:51.795 [2024-07-21 08:29:54.772505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:51.795 [2024-07-21 08:29:54.772516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16912 len:8 PRP1 0x0 PRP2 0x0 00:33:51.795 [2024-07-21 08:29:54.772528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:51.795 [2024-07-21 08:29:54.772552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:51.795 [2024-07-21 08:29:54.772562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16920 len:8 PRP1 0x0 PRP2 0x0 00:33:51.795 [2024-07-21 08:29:54.772575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:51.795 [2024-07-21 08:29:54.772605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:51.795 [2024-07-21 08:29:54.772623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:8 PRP1 0x0 PRP2 0x0 00:33:51.795 [2024-07-21 08:29:54.772637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:51.795 [2024-07-21 08:29:54.772661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:51.795 [2024-07-21 08:29:54.772672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:16936 len:8 PRP1 0x0 PRP2 0x0 00:33:51.795 [2024-07-21 08:29:54.772684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.795 [2024-07-21 08:29:54.772697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:51.795 [2024-07-21 08:29:54.772707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:51.795 [2024-07-21 08:29:54.772723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16232 len:8 PRP1 0x0 PRP2 0x0 00:33:51.796 [2024-07-21 08:29:54.772735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.796 [2024-07-21 08:29:54.772748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:51.796 [2024-07-21 08:29:54.772759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:51.796 [2024-07-21 08:29:54.772770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16240 len:8 PRP1 0x0 PRP2 0x0 00:33:51.796 [2024-07-21 08:29:54.772782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.796 [2024-07-21 08:29:54.772838] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21cc300 was disconnected and freed. reset controller. 
00:33:51.796 [2024-07-21 08:29:54.772856] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:51.796 [2024-07-21 08:29:54.772895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.796 [2024-07-21 08:29:54.772917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.796 [2024-07-21 08:29:54.772932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.796 [2024-07-21 08:29:54.772945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.796 [2024-07-21 08:29:54.772959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.796 [2024-07-21 08:29:54.772971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.796 [2024-07-21 08:29:54.772985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.796 [2024-07-21 08:29:54.772997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.796 [2024-07-21 08:29:54.773010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:51.796 [2024-07-21 08:29:54.773047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219c830 (9): Bad file descriptor 00:33:51.796 [2024-07-21 08:29:54.776349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.796 [2024-07-21 08:29:54.854539] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:51.796 00:33:51.796 Latency(us) 00:33:51.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.796 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:51.796 Verification LBA range: start 0x0 length 0x4000 00:33:51.796 NVMe0n1 : 15.01 8556.93 33.43 484.84 0.00 14129.33 558.27 19418.07 00:33:51.796 =================================================================================================================== 00:33:51.796 Total : 8556.93 33.43 484.84 0.00 14129.33 558.27 19418.07 00:33:51.796 Received shutdown signal, test time was about 15.000000 seconds 00:33:51.796 00:33:51.796 Latency(us) 00:33:51.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.796 =================================================================================================================== 00:33:51.796 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:51.796 08:30:00 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:51.796 08:30:00 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:51.796 08:30:00 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:51.796 08:30:00 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=44214 00:33:51.796 08:30:00 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:51.796 08:30:00 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 44214 /var/tmp/bdevperf.sock 00:33:51.796 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 44214 ']' 00:33:51.796 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:51.796 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:51.796 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:51.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:51.796 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:51.796 08:30:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:51.796 08:30:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:51.796 08:30:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:51.796 08:30:01 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:51.796 [2024-07-21 08:30:01.287975] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:51.796 08:30:01 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:52.053 [2024-07-21 08:30:01.552736] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:52.053 08:30:01 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp 
-a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:52.660 NVMe0n1 00:33:52.660 08:30:01 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:52.919 00:33:52.919 08:30:02 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:53.176 00:33:53.176 08:30:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:53.176 08:30:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:53.433 08:30:02 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:53.690 08:30:03 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:56.973 08:30:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:56.973 08:30:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:56.973 08:30:06 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=44943 00:33:56.973 08:30:06 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:56.973 08:30:06 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 44943 00:33:58.346 0 00:33:58.346 08:30:07 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # 
cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:58.346 [2024-07-21 08:30:00.793328] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:33:58.346 [2024-07-21 08:30:00.793418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44214 ] 00:33:58.346 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.346 [2024-07-21 08:30:00.855136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.346 [2024-07-21 08:30:00.943626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:58.346 [2024-07-21 08:30:03.178633] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:58.346 [2024-07-21 08:30:03.178705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.346 [2024-07-21 08:30:03.178727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.346 [2024-07-21 08:30:03.178743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.346 [2024-07-21 08:30:03.178757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.346 [2024-07-21 08:30:03.178771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.346 [2024-07-21 08:30:03.178784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.346 [2024-07-21 08:30:03.178797] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.346 [2024-07-21 08:30:03.178810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.346 [2024-07-21 08:30:03.178823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.346 [2024-07-21 08:30:03.178865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.346 [2024-07-21 08:30:03.178897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5e830 (9): Bad file descriptor 00:33:58.346 [2024-07-21 08:30:03.191202] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:58.346 Running I/O for 1 seconds... 00:33:58.346 00:33:58.346 Latency(us) 00:33:58.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:58.346 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:58.346 Verification LBA range: start 0x0 length 0x4000 00:33:58.346 NVMe0n1 : 1.01 8219.72 32.11 0.00 0.00 15505.08 3058.35 13981.01 00:33:58.346 =================================================================================================================== 00:33:58.346 Total : 8219.72 32.11 0.00 0.00 15505.08 3058.35 13981.01 00:33:58.346 08:30:07 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:58.347 08:30:07 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:58.347 08:30:07 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:33:58.604 08:30:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:58.604 08:30:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:58.862 08:30:08 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:59.121 08:30:08 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 44214 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 44214 ']' 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 44214 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 44214 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 44214' 00:34:02.410 killing process with pid 44214 00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 44214 
00:34:02.410 08:30:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 44214 00:34:02.668 08:30:12 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:02.668 08:30:12 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:02.928 rmmod nvme_tcp 00:34:02.928 rmmod nvme_fabrics 00:34:02.928 rmmod nvme_keyring 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 41937 ']' 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 41937 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 41937 ']' 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 41937 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@953 -- # uname 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 41937 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 41937' 00:34:02.928 killing process with pid 41937 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 41937 00:34:02.928 08:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 41937 00:34:03.188 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:03.188 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:03.188 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:03.188 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:03.188 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:03.188 08:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.188 08:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:03.188 08:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.734 08:30:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:05.734 00:34:05.734 real 0m34.569s 00:34:05.734 user 2m2.079s 00:34:05.734 sys 0m5.737s 00:34:05.734 08:30:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:05.734 08:30:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- 
# set +x 00:34:05.734 ************************************ 00:34:05.734 END TEST nvmf_failover 00:34:05.734 ************************************ 00:34:05.734 08:30:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:05.734 08:30:14 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:05.734 08:30:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:05.734 08:30:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:05.734 08:30:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:05.734 ************************************ 00:34:05.734 START TEST nvmf_host_discovery 00:34:05.734 ************************************ 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:05.734 * Looking for test storage... 
00:34:05.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.734 08:30:14 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:34:05.734 08:30:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:07.632 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:07.632 08:30:16 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:07.632 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.632 
08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:07.632 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:07.632 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.632 08:30:16 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:07.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:34:07.632 00:34:07.632 --- 10.0.0.2 ping statistics --- 00:34:07.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.632 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:07.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:34:07.632 00:34:07.632 --- 10.0.0.1 ping statistics --- 00:34:07.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.632 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:07.632 08:30:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=48049 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 48049 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 48049 ']' 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:07.633 08:30:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.633 [2024-07-21 08:30:16.987997] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:34:07.633 [2024-07-21 08:30:16.988068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.633 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.633 [2024-07-21 08:30:17.050210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.633 [2024-07-21 08:30:17.133512] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.633 [2024-07-21 08:30:17.133569] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:07.633 [2024-07-21 08:30:17.133597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.633 [2024-07-21 08:30:17.133608] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.633 [2024-07-21 08:30:17.133625] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:07.633 [2024-07-21 08:30:17.133652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:07.633 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:07.633 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:34:07.633 08:30:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:07.633 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:07.633 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.633 08:30:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:07.633 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:07.633 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.633 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.889 [2024-07-21 08:30:17.263400] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:07.889 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.889 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.890 [2024-07-21 08:30:17.271574] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.890 null0 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.890 null1 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=48072 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 48072 /tmp/host.sock 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 48072 ']' 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:07.890 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:07.890 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.890 [2024-07-21 08:30:17.344855] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:34:07.890 [2024-07-21 08:30:17.344950] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48072 ] 00:34:07.890 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.890 [2024-07-21 08:30:17.408685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.890 [2024-07-21 08:30:17.502498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:08.159 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.417 [2024-07-21 08:30:17.909278] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:08.417 08:30:17 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.417 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:34:08.418 08:30:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:08.418 08:30:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:08.418 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:08.677 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:08.677 08:30:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:08.677 08:30:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:08.677 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.677 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.677 08:30:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:08.677 08:30:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:08.677 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.677 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:34:08.677 08:30:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:34:09.242 [2024-07-21 08:30:18.682295] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:09.242 [2024-07-21 08:30:18.682323] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:09.242 [2024-07-21 08:30:18.682356] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:09.242 [2024-07-21 08:30:18.810784] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:09.242 [2024-07-21 08:30:18.872333] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 
00:34:09.242 [2024-07-21 08:30:18.872371] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:09.500 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:09.500 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:09.500 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:09.500 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:09.500 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.500 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:09.500 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.500 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:09.500 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:09.500 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# get_bdev_list 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:09.758 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:10.017 08:30:19 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:10.017 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.018 [2024-07-21 08:30:19.493885] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:10.018 [2024-07-21 08:30:19.494859] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:10.018 [2024-07-21 08:30:19.494909] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:10.018 08:30:19 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
get_subsystem_paths nvme0 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.018 [2024-07-21 08:30:19.621811] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:10.018 08:30:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:34:10.277 [2024-07-21 08:30:19.726430] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:10.277 [2024-07-21 08:30:19.726458] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:10.277 [2024-07-21 08:30:19.726469] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_notification_count 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.217 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.217 [2024-07-21 08:30:20.721835] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:11.217 [2024-07-21 08:30:20.721885] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:11.217 [2024-07-21 08:30:20.721934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.217 [2024-07-21 08:30:20.721966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:11.217 [2024-07-21 08:30:20.721983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.217 [2024-07-21 08:30:20.721997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.218 [2024-07-21 08:30:20.722011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.218 [2024-07-21 08:30:20.722033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.218 [2024-07-21 08:30:20.722048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.218 [2024-07-21 08:30:20.722061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.218 [2024-07-21 08:30:20.722074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604540 is same with the state(5) to be set 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_names 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:11.218 [2024-07-21 08:30:20.731938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1604540 (9): Bad file descriptor 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.218 [2024-07-21 08:30:20.741980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:11.218 [2024-07-21 08:30:20.742183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.218 [2024-07-21 08:30:20.742213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1604540 with addr=10.0.0.2, port=4420 00:34:11.218 [2024-07-21 08:30:20.742231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604540 is same with the state(5) to be set 00:34:11.218 [2024-07-21 08:30:20.742254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1604540 (9): Bad file descriptor 00:34:11.218 [2024-07-21 08:30:20.742275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:11.218 [2024-07-21 08:30:20.742290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 
00:34:11.218 [2024-07-21 08:30:20.742305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:11.218 [2024-07-21 08:30:20.742326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.218 [2024-07-21 08:30:20.752074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:11.218 [2024-07-21 08:30:20.752255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.218 [2024-07-21 08:30:20.752283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1604540 with addr=10.0.0.2, port=4420 00:34:11.218 [2024-07-21 08:30:20.752300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604540 is same with the state(5) to be set 00:34:11.218 [2024-07-21 08:30:20.752321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1604540 (9): Bad file descriptor 00:34:11.218 [2024-07-21 08:30:20.752347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:11.218 [2024-07-21 08:30:20.752362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:11.218 [2024-07-21 08:30:20.752375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:11.218 [2024-07-21 08:30:20.752407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.218 [2024-07-21 08:30:20.762157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:11.218 [2024-07-21 08:30:20.762358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.218 [2024-07-21 08:30:20.762385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1604540 with addr=10.0.0.2, port=4420 00:34:11.218 [2024-07-21 08:30:20.762401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604540 is same with the state(5) to be set 00:34:11.218 [2024-07-21 08:30:20.762423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1604540 (9): Bad file descriptor 00:34:11.218 [2024-07-21 08:30:20.762444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:11.218 [2024-07-21 08:30:20.762467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:11.218 [2024-07-21 08:30:20.762480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:11.218 [2024-07-21 08:30:20.762498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:11.218 [2024-07-21 08:30:20.772244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:11.218 [2024-07-21 08:30:20.772404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.218 [2024-07-21 08:30:20.772434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1604540 with addr=10.0.0.2, port=4420 00:34:11.218 [2024-07-21 
08:30:20.772451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604540 is same with the state(5) to be set 00:34:11.218 [2024-07-21 08:30:20.772473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1604540 (9): Bad file descriptor 00:34:11.218 [2024-07-21 08:30:20.772508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:11.218 [2024-07-21 08:30:20.772527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:11.218 [2024-07-21 08:30:20.772549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:11.218 [2024-07-21 08:30:20.772568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.218 [2024-07-21 08:30:20.782318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:11.218 [2024-07-21 08:30:20.782501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.218 [2024-07-21 08:30:20.782529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1604540 with addr=10.0.0.2, port=4420 00:34:11.218 [2024-07-21 08:30:20.782545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604540 is same with the state(5) to be set 00:34:11.218 [2024-07-21 08:30:20.782580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1604540 (9): Bad file descriptor 00:34:11.218 [2024-07-21 08:30:20.782604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:11.218 [2024-07-21 08:30:20.782628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:11.218 [2024-07-21 08:30:20.782643] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:11.218 [2024-07-21 08:30:20.782667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.218 [2024-07-21 08:30:20.792404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:11.218 [2024-07-21 08:30:20.792583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.218 [2024-07-21 08:30:20.792610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1604540 with addr=10.0.0.2, port=4420 00:34:11.218 [2024-07-21 08:30:20.792636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604540 is same with the state(5) to be set 00:34:11.218 [2024-07-21 08:30:20.792673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1604540 (9): Bad file descriptor 00:34:11.218 [2024-07-21 08:30:20.792710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:11.218 [2024-07-21 08:30:20.792727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:11.218 [2024-07-21 08:30:20.792741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:11.218 [2024-07-21 08:30:20.792760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.218 [2024-07-21 08:30:20.802487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:11.218 [2024-07-21 08:30:20.802695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.218 [2024-07-21 08:30:20.802724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1604540 with addr=10.0.0.2, port=4420 00:34:11.218 [2024-07-21 08:30:20.802740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604540 is same with the state(5) to be set 00:34:11.218 [2024-07-21 08:30:20.802774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1604540 (9): Bad file descriptor 00:34:11.218 [2024-07-21 08:30:20.802798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:11.218 [2024-07-21 08:30:20.802812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:11.218 [2024-07-21 08:30:20.802825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:11.218 [2024-07-21 08:30:20.802843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.218 [2024-07-21 08:30:20.808585] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:11.218 [2024-07-21 08:30:20.808639] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.218 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:34:11.219 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.479 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:34:11.479 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.480 08:30:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.480 08:30:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.480 08:30:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:11.480 08:30:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:11.480 08:30:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:11.480 08:30:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.480 08:30:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:11.480 08:30:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.480 08:30:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.860 [2024-07-21 08:30:22.089757] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:12.860 [2024-07-21 08:30:22.089789] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:12.860 [2024-07-21 08:30:22.089812] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:12.860 [2024-07-21 08:30:22.176103] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:12.860 [2024-07-21 08:30:22.479318] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:12.860 [2024-07-21 08:30:22.479370] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:12.860 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.860 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:12.860 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:12.860 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:12.860 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:12.860 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:12.860 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:12.860 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:12.860 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:12.860 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.860 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.120 request: 00:34:13.120 { 00:34:13.120 "name": "nvme", 00:34:13.120 "trtype": "tcp", 00:34:13.120 "traddr": "10.0.0.2", 00:34:13.120 "adrfam": "ipv4", 00:34:13.120 "trsvcid": "8009", 00:34:13.120 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:13.120 "wait_for_attach": true, 00:34:13.120 "method": "bdev_nvme_start_discovery", 00:34:13.120 "req_id": 1 00:34:13.120 } 00:34:13.120 Got JSON-RPC error 
response 00:34:13.120 response: 00:34:13.120 { 00:34:13.120 "code": -17, 00:34:13.120 "message": "File exists" 00:34:13.120 } 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.120 08:30:22 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.120 request: 00:34:13.120 { 00:34:13.120 "name": "nvme_second", 00:34:13.120 
"trtype": "tcp", 00:34:13.120 "traddr": "10.0.0.2", 00:34:13.120 "adrfam": "ipv4", 00:34:13.120 "trsvcid": "8009", 00:34:13.120 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:13.120 "wait_for_attach": true, 00:34:13.120 "method": "bdev_nvme_start_discovery", 00:34:13.120 "req_id": 1 00:34:13.120 } 00:34:13.120 Got JSON-RPC error response 00:34:13.120 response: 00:34:13.120 { 00:34:13.120 "code": -17, 00:34:13.120 "message": "File exists" 00:34:13.120 } 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:13.120 08:30:22 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 
-f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.120 08:30:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.491 [2024-07-21 08:30:23.690772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.491 [2024-07-21 08:30:23.690825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1621430 with addr=10.0.0.2, port=8010 00:34:14.491 [2024-07-21 08:30:23.690857] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:14.491 [2024-07-21 08:30:23.690872] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:14.491 [2024-07-21 08:30:23.690886] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:15.422 [2024-07-21 08:30:24.693172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-21 08:30:24.693209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1621430 with addr=10.0.0.2, port=8010 00:34:15.422 [2024-07-21 08:30:24.693231] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:15.422 [2024-07-21 08:30:24.693246] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:15.422 [2024-07-21 08:30:24.693259] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:16.354 [2024-07-21 08:30:25.695437] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:16.354 request: 00:34:16.354 { 00:34:16.354 "name": "nvme_second", 00:34:16.354 "trtype": "tcp", 00:34:16.354 "traddr": "10.0.0.2", 00:34:16.354 "adrfam": "ipv4", 00:34:16.354 "trsvcid": "8010", 00:34:16.354 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:16.354 "wait_for_attach": false, 
00:34:16.354 "attach_timeout_ms": 3000, 00:34:16.354 "method": "bdev_nvme_start_discovery", 00:34:16.354 "req_id": 1 00:34:16.354 } 00:34:16.354 Got JSON-RPC error response 00:34:16.354 response: 00:34:16.354 { 00:34:16.354 "code": -110, 00:34:16.354 "message": "Connection timed out" 00:34:16.354 } 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 48072 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # 
nvmftestfini 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:16.354 rmmod nvme_tcp 00:34:16.354 rmmod nvme_fabrics 00:34:16.354 rmmod nvme_keyring 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 48049 ']' 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 48049 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 48049 ']' 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 48049 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 48049 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 48049' 00:34:16.354 killing process with pid 48049 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 48049 00:34:16.354 08:30:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 48049 00:34:16.614 08:30:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:16.614 08:30:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:16.614 08:30:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:16.614 08:30:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:16.614 08:30:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:16.614 08:30:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.614 08:30:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:16.614 08:30:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.516 08:30:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:18.516 00:34:18.516 real 0m13.252s 00:34:18.516 user 0m19.308s 00:34:18.516 sys 0m2.807s 00:34:18.516 08:30:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:18.516 08:30:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.516 ************************************ 00:34:18.516 END TEST nvmf_host_discovery 00:34:18.516 ************************************ 00:34:18.516 08:30:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:18.516 08:30:28 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:18.516 08:30:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:34:18.516 08:30:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:18.516 08:30:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:18.773 ************************************ 00:34:18.773 START TEST nvmf_host_multipath_status 00:34:18.773 ************************************ 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:18.773 * Looking for test storage... 00:34:18.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.773 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:34:18.774 08:30:28 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.774 
08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:34:18.774 08:30:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:20.671 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:20.671 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:20.671 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:20.671 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:20.671 08:30:30 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:20.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:20.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:34:20.671 00:34:20.671 --- 10.0.0.2 ping statistics --- 00:34:20.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.671 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:20.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:34:20.671 00:34:20.671 --- 10.0.0.1 ping statistics --- 00:34:20.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.671 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=51214 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 51214 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 51214 ']' 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:20.671 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:20.671 [2024-07-21 08:30:30.298880] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:34:20.671 [2024-07-21 08:30:30.298982] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.929 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.929 [2024-07-21 08:30:30.366662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:20.929 [2024-07-21 08:30:30.461871] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.930 [2024-07-21 08:30:30.461933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.930 [2024-07-21 08:30:30.461949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.930 [2024-07-21 08:30:30.461963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.930 [2024-07-21 08:30:30.461975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:20.930 [2024-07-21 08:30:30.465636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.930 [2024-07-21 08:30:30.465671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.187 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:21.187 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:21.187 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:21.187 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:21.187 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:21.187 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.187 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=51214 00:34:21.187 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:21.444 [2024-07-21 08:30:30.875520] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.444 08:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:21.706 Malloc0 00:34:21.706 08:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:22.002 08:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc0 00:34:22.259 08:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.519 [2024-07-21 08:30:31.966823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.519 08:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:22.777 [2024-07-21 08:30:32.263747] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:22.777 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=51500 00:34:22.777 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:22.777 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:22.777 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 51500 /var/tmp/bdevperf.sock 00:34:22.777 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 51500 ']' 00:34:22.777 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:22.777 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:22.777 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:34:22.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:22.777 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:22.777 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:23.035 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:23.035 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:23.035 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:23.293 08:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:23.861 Nvme0n1 00:34:23.861 08:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:24.429 Nvme0n1 00:34:24.429 08:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:24.429 08:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:26.335 08:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:26.335 08:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:26.592 08:30:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:26.849 08:30:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:27.782 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:27.782 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:27.782 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.782 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:28.039 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.039 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:28.039 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.039 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:28.296 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:28.296 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:28.296 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.296 08:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:28.553 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.553 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:28.553 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.553 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:28.810 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.810 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:28.810 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.810 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:29.068 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.068 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:29.068 08:30:38 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.068 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:29.326 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.326 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:29.326 08:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:29.584 08:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:29.844 08:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:30.781 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:30.781 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:30.781 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.781 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:31.040 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:34:31.040 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:31.040 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.040 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:31.297 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.297 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:31.297 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.297 08:30:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:31.556 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.556 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:31.556 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.556 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:31.814 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.814 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:34:31.814 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.814 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:32.072 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.072 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:32.072 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.072 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:32.332 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.332 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:32.332 08:30:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:32.590 08:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:32.848 08:30:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:33.783 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:33.783 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:33.783 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.783 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:34.041 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.041 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:34.041 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.041 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:34.298 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:34.298 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:34.298 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.298 08:30:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:34.555 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.555 08:30:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:34.555 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.555 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:34.813 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.813 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:34.813 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.813 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:35.071 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.071 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:35.071 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.071 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:35.327 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.327 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
00:34:35.327 08:30:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:35.597 08:30:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:35.856 08:30:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:36.789 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:36.789 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:36.789 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.789 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:37.082 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.082 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:37.082 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.082 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:37.339 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:34:37.339 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:37.339 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.339 08:30:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:37.596 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.596 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:37.596 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.596 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:37.854 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.854 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:37.854 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.854 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:38.111 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.111 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:34:38.111 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.111 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:38.369 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:38.369 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:38.369 08:30:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:38.627 08:30:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:38.885 08:30:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:39.819 08:30:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:39.819 08:30:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:39.819 08:30:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.819 08:30:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:40.077 08:30:49 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:40.077 08:30:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:40.077 08:30:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.077 08:30:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:40.357 08:30:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:40.357 08:30:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:40.357 08:30:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.357 08:30:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:40.614 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.614 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:40.614 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.614 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:40.871 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.871 
08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:40.871 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.871 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:41.129 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:41.129 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:41.129 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.129 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:41.386 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:41.386 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:41.386 08:30:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:41.643 08:30:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:41.902 08:30:51 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:34:42.890 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:42.890 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:42.890 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.890 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:43.148 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:43.148 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:43.148 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.148 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:43.407 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.407 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:43.407 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.407 08:30:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:43.664 08:30:53 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.664 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:43.664 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.664 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:43.921 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.921 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:43.921 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.921 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:44.178 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:44.178 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:44.178 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.178 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:44.436 08:30:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.436 08:30:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:44.694 08:30:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:44.694 08:30:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:44.952 08:30:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:45.211 08:30:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:46.168 08:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:46.168 08:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:46.168 08:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.168 08:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:46.426 08:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.426 08:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:46.426 08:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.426 08:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:46.684 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.684 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:46.684 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.684 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:46.943 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.943 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:46.944 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.944 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:47.202 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.202 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:47.202 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:34:47.202 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:47.460 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.460 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:47.460 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.460 08:30:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:47.718 08:30:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.718 08:30:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:47.718 08:30:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:47.976 08:30:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:48.234 08:30:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:49.170 08:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:49.170 08:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:49.170 08:30:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.170 08:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:49.428 08:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:49.428 08:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:49.428 08:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.428 08:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:49.685 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.685 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:49.685 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.685 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:49.958 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.958 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:49.958 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.958 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:50.215 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.215 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:50.215 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.215 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:50.473 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.473 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:50.473 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.473 08:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:50.730 08:31:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.730 08:31:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:50.730 08:31:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:50.987 08:31:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:51.246 08:31:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:52.209 08:31:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:52.209 08:31:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:52.209 08:31:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.209 08:31:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:52.466 08:31:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.466 08:31:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:52.466 08:31:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.466 08:31:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:52.723 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.723 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:52.723 08:31:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.723 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:52.979 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.980 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:52.980 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.980 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:53.246 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.246 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:53.246 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.246 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:53.503 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.503 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:53.503 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.503 08:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:53.760 08:31:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.760 08:31:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:53.760 08:31:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:54.016 08:31:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:54.275 08:31:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:55.209 08:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:55.209 08:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:55.209 08:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.209 08:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:55.466 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.467 08:31:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:55.467 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.467 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:55.724 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:55.724 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:55.724 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.724 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:55.981 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.981 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:55.981 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.981 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:56.239 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.239 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:56.239 
08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.239 08:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:56.496 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.496 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:56.496 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.496 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 51500 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 51500 ']' 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 51500 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 51500 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51500' 00:34:56.756 killing process with pid 51500 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 51500 00:34:56.756 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 51500 00:34:57.026 Connection closed with partial response: 00:34:57.026 00:34:57.026 00:34:57.026 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 51500 00:34:57.026 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:57.026 [2024-07-21 08:30:32.329102] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:34:57.026 [2024-07-21 08:30:32.329186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51500 ] 00:34:57.026 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.026 [2024-07-21 08:30:32.387645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.026 [2024-07-21 08:30:32.473308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:57.026 Running I/O for 90 seconds... 
00:34:57.026 [2024-07-21 08:30:48.097578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.097656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.097719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.097741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.097766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.097783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.097806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.097823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.097845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.097861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.097884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 
[2024-07-21 08:30:48.097916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.097938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.097954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.097991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 
08:30:48.098145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 
08:30:48.098340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.026 [2024-07-21 08:30:48.098624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.098669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 
08:30:48.098692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.098709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.098738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.098755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.099524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.099546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.099588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.099607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.099640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.099658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.099682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.099699] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.099722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.099739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.099762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.099778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.099802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.099819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.099842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.099859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.099882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.099898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.099937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.099953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.099975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.099991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.100043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.100062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.100101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.100118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.100142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.100159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.100183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.026 [2024-07-21 08:30:48.100199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:57.026 [2024-07-21 08:30:48.100223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.027 [2024-07-21 08:30:48.100239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.027 [2024-07-21 08:30:48.100279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.027 [2024-07-21 08:30:48.100388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.027 [2024-07-21 08:30:48.100447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.027 [2024-07-21 08:30:48.100487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.027 [2024-07-21 08:30:48.100526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.027 [2024-07-21 08:30:48.100565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.027 [2024-07-21 08:30:48.100630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.027 [2024-07-21 08:30:48.100680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.100722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.100764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.100805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.100847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.100888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.100945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.100986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.101968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.101984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:68056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:68096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.102977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.102993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:68160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.027 [2024-07-21 08:30:48.103720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:57.027 [2024-07-21 08:30:48.103751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.103768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.103798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.103815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.103845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.103862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.103908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:68232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.103925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.103969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.103986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.104030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.104047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.104077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:30:48.104093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.104123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.104143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.104173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:68264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.104189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.104219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:68272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.104235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.104264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.104281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.104310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.104326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.104356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.104372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.104402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.104419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:30:48.104449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:30:48.104465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.737473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.737535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.737652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:31:03.737684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.737710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.737727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.737751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.737767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.737790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.737806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.737837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.737855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.737877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.737894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.737925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.737942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.737964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.737981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.738776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:31:03.738818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.738843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:31:03.738860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.740495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.740542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:31:03.740582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:31:03.740629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:31:03.740670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:31:03.740709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.028 [2024-07-21 08:31:03.740748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.740787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.740826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.740865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.740909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.740949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:57.028 [2024-07-21 08:31:03.740971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.028 [2024-07-21 08:31:03.740987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.741342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.741965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.741988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.742317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.742355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.742394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.742437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.742477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.742516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.742556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.742594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.742641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.742729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.742767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.742964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.742996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.743012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.743052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.743091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.743130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.743176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.743216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.743255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.743294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.743333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.743842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.743887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.743927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.743976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.743998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.744014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.744053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.744092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.744332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.744372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.029 [2024-07-21 08:31:03.744672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.744695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.029 [2024-07-21 08:31:03.744712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:57.029 [2024-07-21 08:31:03.746224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.746572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.746611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.746667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.746707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.746783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.746975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.746997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.747013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.747036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.747053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.748819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.748857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.748899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.748939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.748977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.748999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.749015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.749054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.749093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.749132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.749170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.749209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.749247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.749286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.749324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.749362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.749405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.749443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.749482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.749521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.749560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.749598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.749649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.749673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.749689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.750590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.750625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.750655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.750673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.750695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.750711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.750734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.750750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.750778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.750796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.750818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.750834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.750856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.750873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.750895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.750912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.751957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.751982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.752026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.752044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.752082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.752103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.752126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.752143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.752164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.752181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.752202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.030 [2024-07-21 08:31:03.752218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.752241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.752257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.752279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.752295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.752316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.752337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.752360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.030 [2024-07-21 08:31:03.752377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:57.030 [2024-07-21 08:31:03.752399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.752415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.752468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.752508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.752546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.752584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.752632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.752673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.752711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.752749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.752787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.752829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.752869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.752907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.752944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.752966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.752982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.753021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.753060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.753098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.753137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.753175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.753213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.753251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.753306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.753348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.753384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.753421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.753459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.753481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.753497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.754791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.754817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.754846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.754864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.754887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.754903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.754925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.754941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.754963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.754979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.755017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.755056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.755100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.755154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.755198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.755237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.755274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.755311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.755348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.755384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.755422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.755459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.755963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.755993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.756038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.756081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.756121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.756159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.756197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.756235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.756288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.756326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.756362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.756398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.756436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.756478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.756517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.756539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.756559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.757008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.757033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.757060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.757078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.757101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.757117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.757138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.757154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.757176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.757193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.757214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.031 [2024-07-21 08:31:03.757230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.757251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.757273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.757296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.031 [2024-07-21 08:31:03.757313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:57.031 [2024-07-21 08:31:03.757335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.032 [2024-07-21 08:31:03.757351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.757388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.032 [2024-07-21 08:31:03.757405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.757426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.032 [2024-07-21 08:31:03.757442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.757463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.032 [2024-07-21 08:31:03.757478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.757504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.757521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.757542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.032 [2024-07-21 08:31:03.757557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.757578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.757609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.757643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.032 [2024-07-21 08:31:03.757660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.757682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.032 [2024-07-21 08:31:03.757698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.757720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.032 [2024-07-21 08:31:03.757736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.757758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.757774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.757796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.757812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.759453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.759514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.759553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.759589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.759662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.759700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.032 [2024-07-21 08:31:03.759738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.759776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.032 [2024-07-21 08:31:03.759814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.032 [2024-07-21 08:31:03.759852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.759890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.759944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.759965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.759980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.760002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.760017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:57.032 [2024-07-21 08:31:03.760038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.032 [2024-07-21 08:31:03.760054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.760075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.760090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.760112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.760131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.760153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.760169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.760191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.760206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.760227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.760242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.760263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.760279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.760299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.760315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.760337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.760352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.762582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.762608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.762645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.762663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.762686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.762702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.762724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.762740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.762762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.762777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.762799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.762820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.762843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.762859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.762881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.762897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.762919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.762935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.762972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.762988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.763025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.763061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.763098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.763136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.763191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.763228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.763420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.763571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.763609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.763660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.763947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.763968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.763983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.764010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.764026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.764047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.032 [2024-07-21 08:31:03.764063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.765669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.032 [2024-07-21 08:31:03.765694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:34:57.032 [2024-07-21 08:31:03.765722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.765739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.765762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.765778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.765800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.765816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.765838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.765855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.765882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.765899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.765937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.765952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.765979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.765995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.766017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.766032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.766053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.766068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.766109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.766126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.766149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.766165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.766186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.766202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.766224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.766243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.766266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.766282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.766304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.766320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.767225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.767277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.767317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.767356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.767394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.767436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.767476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.767514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.767551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.767604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.767669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.767715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.767753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.767795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.767834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.767872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.767910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.767948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.767970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.767986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.768008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.768023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.768046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.768062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.768084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.768100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.769160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.769206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.769244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.769282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.769349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.769386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.769427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.769465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.769501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.769538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.769574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.769636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.769677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.769714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.769736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.769753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.770601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.770634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.770668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.770686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.770709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.770725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.770747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.770763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.770791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.770808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.770830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.770846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.770868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.770884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.770906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.770921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.770943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.770959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.770981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.770996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.771018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.771033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.771055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.771071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.771093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.033 [2024-07-21 08:31:03.771109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.771146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.771166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:34:57.033 [2024-07-21 08:31:03.771187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.033 [2024-07-21 08:31:03.771203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:34:57.034 [2024-07-21 08:31:03.771224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.034 [2024-07-21 08:31:03.771240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:34:57.034 [2024-07-21 08:31:03.771261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.034 [2024-07-21 08:31:03.771276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:34:57.034 [2024-07-21 08:31:03.771297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.771313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.771350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.771386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.771423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.771460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.771497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.771534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.771572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.771638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.771680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.771718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.771755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.771778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.771794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.772622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.772646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.772674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.772691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.772713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.772729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.772752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.772768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.772790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.772806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.772828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.772844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.774452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.774490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.774659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.774741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.774854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.774892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.774944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.774966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.774981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.775018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.775055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.775109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.775147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.775188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.775227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.775265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.775302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.775340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.775377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.775414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.775437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.775453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.777027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.777071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.777125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.777164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.777209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.777248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.777286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.777324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.777369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.777407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.777445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.777483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.777520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.777557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.777594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.777642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.777681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.034 [2024-07-21 08:31:03.777724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.034 [2024-07-21 08:31:03.777746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.034 [2024-07-21 08:31:03.777762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.777784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.777800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.777821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.777837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.777859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.777875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.777897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.777913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.777950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.777966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.777988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.778003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.778025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.778040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.778062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.778077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.778098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.778113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.778134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.778150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.778176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.778192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.779054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.779093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.780757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.780783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.780812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.780829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.780852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.780868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.780891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.780908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.780930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.780946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.780967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.780983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.781022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.781059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.781232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.781460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.781496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.781575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.781759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.781798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.781873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.781948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.781969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.781985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.782007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.782023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.782045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.782061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.782083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.782099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.782136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.782156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.782179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.782211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.783505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.783550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.783587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.783652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.783690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.783728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.783767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.783805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.783843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.783880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.783918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.783967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.783989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.035 [2024-07-21 08:31:03.784005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.784027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.784043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.784065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.784081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.784103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.784119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.784141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.784157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.784179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.784195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.784217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.784233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.784255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.784271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.784293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.784309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.784331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.035 [2024-07-21 08:31:03.784347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:57.035 [2024-07-21 08:31:03.784368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.784399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.784426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.784442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.784463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.784479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.784500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.784515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.784536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.784552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.784573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.784588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.784635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.784653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.784675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.784690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.784713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.784729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.785341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.785364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.785405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.785423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.785444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.785460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.785481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.785497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.785519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.785539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.787477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.787522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.787560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.787599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.787646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.787689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.787728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.787766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.787804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.787841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.787879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.787923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.787962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.787984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.788112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.788150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.788432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.788470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.788506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.788543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.788580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.788641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.788718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.788755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.788794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.788897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.788929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.791165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.791231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.791270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.791309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.791347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.791385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.791423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.791461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.791498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.791536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.791573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.036 [2024-07-21 08:31:03.791624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.791666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.791704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.791742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.791780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:57.036 [2024-07-21 08:31:03.791802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.036 [2024-07-21 08:31:03.791818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:57.037 [2024-07-21 08:31:03.791840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.037 [2024-07-21 08:31:03.791855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:57.037 [2024-07-21 08:31:03.791877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.037 [2024-07-21 08:31:03.791893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:57.037 Received shutdown signal, test time was about 32.347374 seconds 00:34:57.037 00:34:57.037 Latency(us) 00:34:57.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.037 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:57.037 Verification LBA range: start 0x0 length 0x4000 00:34:57.037 Nvme0n1 : 32.35 7992.46 31.22 0.00 0.00 15988.23 503.66 4026531.84 00:34:57.037 =================================================================================================================== 00:34:57.037 Total : 7992.46 31.22 0.00 0.00 15988.23 503.66 4026531.84 00:34:57.037 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:57.295 
08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:57.295 rmmod nvme_tcp 00:34:57.295 rmmod nvme_fabrics 00:34:57.295 rmmod nvme_keyring 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 51214 ']' 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 51214 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 51214 ']' 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 51214 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 51214 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51214' 00:34:57.295 killing process with pid 51214 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 51214 00:34:57.295 08:31:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 51214 00:34:57.554 08:31:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:57.554 08:31:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:57.554 08:31:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:57.554 08:31:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:57.554 08:31:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:57.554 08:31:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.554 08:31:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:57.554 08:31:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.089 08:31:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:00.089 00:35:00.089 real 0m41.014s 00:35:00.089 user 2m3.909s 00:35:00.089 sys 0m10.390s 00:35:00.089 08:31:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:00.089 08:31:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 
00:35:00.089 ************************************ 00:35:00.089 END TEST nvmf_host_multipath_status 00:35:00.089 ************************************ 00:35:00.089 08:31:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:00.089 08:31:09 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:00.089 08:31:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:00.089 08:31:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:00.089 08:31:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.089 ************************************ 00:35:00.089 START TEST nvmf_discovery_remove_ifc 00:35:00.089 ************************************ 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:00.089 * Looking for test storage... 
00:35:00.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.089 08:31:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.089 08:31:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:35:00.089 08:31:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.991 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.991 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:35:01.991 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:01.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:01.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:01.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:01.992 Found net devices under 0000:0a:00.1: cvl_0_1 
00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.992 
08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:01.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:35:01.992 00:35:01.992 --- 10.0.0.2 ping statistics --- 00:35:01.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.992 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:01.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:35:01.992 00:35:01.992 --- 10.0.0.1 ping statistics --- 00:35:01.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.992 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=57564 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:01.992 
08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 57564 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 57564 ']' 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:01.992 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.993 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:01.993 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.993 [2024-07-21 08:31:11.453943] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:35:01.993 [2024-07-21 08:31:11.454023] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.993 EAL: No free 2048 kB hugepages reported on node 1 00:35:01.993 [2024-07-21 08:31:11.526935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.993 [2024-07-21 08:31:11.614633] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:01.993 [2024-07-21 08:31:11.614702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:01.993 [2024-07-21 08:31:11.614715] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.993 [2024-07-21 08:31:11.614726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.993 [2024-07-21 08:31:11.614756] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:01.993 [2024-07-21 08:31:11.614781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.250 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:02.250 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:35:02.250 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:02.250 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:02.250 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.250 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.250 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:02.250 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.250 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.250 [2024-07-21 08:31:11.758652] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.250 [2024-07-21 08:31:11.766851] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:02.250 null0 00:35:02.250 [2024-07-21 08:31:11.798780] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.251 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.251 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=57714 00:35:02.251 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:02.251 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 57714 /tmp/host.sock 00:35:02.251 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 57714 ']' 00:35:02.251 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:35:02.251 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:02.251 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:02.251 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:02.251 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:02.251 08:31:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.251 [2024-07-21 08:31:11.868236] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:35:02.251 [2024-07-21 08:31:11.868312] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57714 ] 00:35:02.509 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.509 [2024-07-21 08:31:11.926408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.509 [2024-07-21 08:31:12.010249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.509 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:02.509 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:35:02.509 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:02.509 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:02.509 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.509 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.509 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.509 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:02.509 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.509 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.768 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.768 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:02.768 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.768 08:31:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.715 [2024-07-21 08:31:13.185791] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:03.715 [2024-07-21 08:31:13.185833] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:03.715 [2024-07-21 08:31:13.185857] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:03.715 [2024-07-21 08:31:13.312294] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:03.972 [2024-07-21 08:31:13.490472] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:03.972 [2024-07-21 08:31:13.490543] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:03.972 [2024-07-21 08:31:13.490588] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:03.972 [2024-07-21 08:31:13.490626] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:03.972 [2024-07-21 08:31:13.490678] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:03.972 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.972 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:03.972 08:31:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:03.972 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:03.972 [2024-07-21 08:31:13.495040] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1208300 was disconnected and freed. delete nvme_qpair. 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:03.973 08:31:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:03.973 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:04.231 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.231 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:04.231 08:31:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:05.169 08:31:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:05.169 08:31:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.169 08:31:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.169 08:31:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.169 08:31:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:05.169 08:31:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:05.169 08:31:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:05.169 08:31:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.169 08:31:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:05.169 08:31:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:06.105 08:31:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:06.105 08:31:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:06.105 08:31:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.105 08:31:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:06.105 08:31:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.105 08:31:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:06.105 08:31:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:06.105 08:31:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.105 08:31:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:06.105 08:31:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:07.492 08:31:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:07.492 08:31:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:07.492 08:31:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.492 08:31:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:07.492 08:31:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:07.492 08:31:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:07.492 08:31:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:07.492 08:31:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.492 08:31:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:07.492 
08:31:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:08.429 08:31:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:08.429 08:31:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:08.429 08:31:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:08.429 08:31:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.429 08:31:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:08.429 08:31:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.429 08:31:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:08.429 08:31:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.429 08:31:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:08.429 08:31:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:09.364 08:31:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:09.364 08:31:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:09.364 08:31:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.364 08:31:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:09.364 08:31:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:09.364 08:31:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:09.364 08:31:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:09.364 
08:31:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.364 08:31:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:09.364 08:31:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:09.364 [2024-07-21 08:31:18.931362] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:09.364 [2024-07-21 08:31:18.931427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:09.364 [2024-07-21 08:31:18.931450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:09.364 [2024-07-21 08:31:18.931469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:09.364 [2024-07-21 08:31:18.931484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:09.364 [2024-07-21 08:31:18.931499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:09.364 [2024-07-21 08:31:18.931514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:09.364 [2024-07-21 08:31:18.931529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:09.364 [2024-07-21 08:31:18.931543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:09.364 [2024-07-21 08:31:18.931558] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:09.364 [2024-07-21 08:31:18.931573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:09.364 [2024-07-21 08:31:18.931587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ceb50 is same with the state(5) to be set 00:35:09.364 [2024-07-21 08:31:18.941381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ceb50 (9): Bad file descriptor 00:35:09.364 [2024-07-21 08:31:18.951429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:10.302 08:31:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:10.302 08:31:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.302 08:31:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:10.302 08:31:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.302 08:31:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:10.302 08:31:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:10.302 08:31:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:10.562 [2024-07-21 08:31:20.008657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:10.562 [2024-07-21 08:31:20.008718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ceb50 with addr=10.0.0.2, port=4420 00:35:10.562 [2024-07-21 08:31:20.008743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ceb50 is same with the state(5) to be set 00:35:10.562 
[2024-07-21 08:31:20.008785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ceb50 (9): Bad file descriptor 00:35:10.562 [2024-07-21 08:31:20.009234] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:10.562 [2024-07-21 08:31:20.009266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:10.562 [2024-07-21 08:31:20.009282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:10.562 [2024-07-21 08:31:20.009297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:10.562 [2024-07-21 08:31:20.009327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:10.562 [2024-07-21 08:31:20.009344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:10.562 08:31:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.562 08:31:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:10.562 08:31:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:11.497 [2024-07-21 08:31:21.011856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:35:11.497 [2024-07-21 08:31:21.011926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:11.497 [2024-07-21 08:31:21.011940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:11.497 [2024-07-21 08:31:21.011954] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:35:11.497 [2024-07-21 08:31:21.011987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:11.497 [2024-07-21 08:31:21.012033] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:11.497 [2024-07-21 08:31:21.012096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:11.497 [2024-07-21 08:31:21.012118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.497 [2024-07-21 08:31:21.012136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:11.497 [2024-07-21 08:31:21.012149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.497 [2024-07-21 08:31:21.012163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:11.497 [2024-07-21 08:31:21.012176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.497 [2024-07-21 08:31:21.012189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:11.497 
[2024-07-21 08:31:21.012207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.497 [2024-07-21 08:31:21.012222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:11.497 [2024-07-21 08:31:21.012234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.497 [2024-07-21 08:31:21.012247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:35:11.497 [2024-07-21 08:31:21.012350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cdf80 (9): Bad file descriptor 00:35:11.497 [2024-07-21 08:31:21.013370] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:11.497 [2024-07-21 08:31:21.013406] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:11.497 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.755 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:11.755 08:31:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:12.697 08:31:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:12.697 08:31:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:12.697 08:31:22 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:12.697 08:31:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.697 08:31:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:12.697 08:31:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:12.697 08:31:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:12.697 08:31:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.697 08:31:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:12.697 08:31:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:13.637 [2024-07-21 08:31:23.061776] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:13.637 [2024-07-21 08:31:23.061803] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:13.637 [2024-07-21 08:31:23.061825] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:13.637 [2024-07-21 08:31:23.149156] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:13.637 08:31:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:13.637 08:31:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:13.637 08:31:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:13.637 08:31:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.637 08:31:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.637 
08:31:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:13.637 08:31:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:13.637 08:31:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.637 08:31:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:13.637 08:31:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:13.895 [2024-07-21 08:31:23.374766] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:13.895 [2024-07-21 08:31:23.374817] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:13.895 [2024-07-21 08:31:23.374851] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:13.895 [2024-07-21 08:31:23.374877] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:13.895 [2024-07-21 08:31:23.374892] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:13.895 [2024-07-21 08:31:23.381036] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11e5390 was disconnected and freed. delete nvme_qpair. 
00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 57714 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 57714 ']' 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 57714 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 57714 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57714' 00:35:14.832 killing process with pid 57714 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 57714 00:35:14.832 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 57714 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:15.091 rmmod nvme_tcp 00:35:15.091 rmmod nvme_fabrics 00:35:15.091 rmmod nvme_keyring 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 57564 ']' 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 57564 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 57564 ']' 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 57564 00:35:15.091 08:31:24 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 57564 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57564' 00:35:15.091 killing process with pid 57564 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 57564 00:35:15.091 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 57564 00:35:15.349 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:15.349 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:15.349 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:15.349 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:15.349 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:15.350 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.350 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:15.350 08:31:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.882 08:31:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:17.882 00:35:17.882 real 0m17.669s 
00:35:17.882 user 0m25.633s 00:35:17.882 sys 0m3.004s 00:35:17.882 08:31:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:17.882 08:31:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.882 ************************************ 00:35:17.882 END TEST nvmf_discovery_remove_ifc 00:35:17.882 ************************************ 00:35:17.882 08:31:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:17.882 08:31:26 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:17.882 08:31:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:17.882 08:31:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:17.882 08:31:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:17.882 ************************************ 00:35:17.882 START TEST nvmf_identify_kernel_target 00:35:17.882 ************************************ 00:35:17.882 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:17.882 * Looking for test storage... 
00:35:17.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:17.882 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.883 08:31:26 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:35:17.883 08:31:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.883 08:31:27 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:35:17.883 08:31:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:35:19.263 08:31:28 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:19.263 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.263 08:31:28 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:19.263 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:19.263 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.263 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:19.264 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:19.264 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:19.522 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:19.522 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:19.522 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:19.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:19.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:35:19.523 00:35:19.523 --- 10.0.0.2 ping statistics --- 00:35:19.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.523 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:19.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:19.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:35:19.523 00:35:19.523 --- 10.0.0.1 ping statistics --- 00:35:19.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.523 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:19.523 08:31:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.523 08:31:29 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:19.523 08:31:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:20.458 Waiting for block devices as requested 00:35:20.718 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:20.718 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:20.718 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:20.977 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:20.977 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:20.977 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:21.235 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:21.235 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:21.235 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:21.235 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:21.235 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:21.495 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:21.495 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:21.495 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:21.495 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:21.789 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:21.789 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:21.789 No valid GPT data, bailing 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:35:21.789 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:22.048 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:22.048 00:35:22.048 Discovery Log Number of Records 2, Generation counter 2 00:35:22.048 =====Discovery Log Entry 0====== 00:35:22.048 trtype: tcp 00:35:22.048 adrfam: ipv4 00:35:22.048 subtype: current discovery subsystem 00:35:22.048 treq: not specified, sq flow control disable supported 00:35:22.048 portid: 1 00:35:22.048 trsvcid: 4420 00:35:22.048 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:22.048 traddr: 10.0.0.1 00:35:22.048 eflags: none 00:35:22.048 sectype: none 00:35:22.048 =====Discovery Log Entry 1====== 00:35:22.048 trtype: tcp 00:35:22.048 adrfam: ipv4 00:35:22.048 subtype: nvme subsystem 00:35:22.048 treq: not specified, sq flow control disable supported 00:35:22.048 portid: 1 00:35:22.048 trsvcid: 4420 00:35:22.048 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:22.048 traddr: 10.0.0.1 00:35:22.048 eflags: none 00:35:22.048 sectype: none 00:35:22.048 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:22.048 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:22.048 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.048 ===================================================== 00:35:22.048 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:22.048 ===================================================== 00:35:22.048 Controller Capabilities/Features 00:35:22.048 ================================ 00:35:22.048 Vendor ID: 0000 00:35:22.048 Subsystem Vendor ID: 0000 00:35:22.048 Serial Number: 4fcd14867e4f3eba9909 00:35:22.048 Model Number: Linux 00:35:22.048 Firmware Version: 6.7.0-68 00:35:22.048 Recommended Arb Burst: 0 00:35:22.048 IEEE OUI Identifier: 00 00 00 00:35:22.048 Multi-path I/O 00:35:22.048 May have multiple subsystem ports: No 00:35:22.048 May have multiple controllers: No 00:35:22.048 Associated with SR-IOV VF: No 00:35:22.048 Max Data Transfer Size: Unlimited 00:35:22.048 Max Number of Namespaces: 0 00:35:22.048 Max Number of I/O Queues: 1024 00:35:22.048 NVMe Specification Version (VS): 1.3 00:35:22.048 NVMe Specification Version (Identify): 1.3 00:35:22.048 Maximum Queue Entries: 1024 00:35:22.048 Contiguous Queues Required: No 00:35:22.048 Arbitration Mechanisms Supported 00:35:22.048 Weighted Round Robin: Not Supported 00:35:22.048 Vendor Specific: Not Supported 00:35:22.048 Reset Timeout: 7500 ms 00:35:22.048 Doorbell Stride: 4 bytes 00:35:22.048 NVM Subsystem Reset: Not Supported 00:35:22.048 Command Sets Supported 00:35:22.048 NVM Command Set: Supported 00:35:22.048 Boot Partition: Not Supported 00:35:22.048 Memory Page Size Minimum: 4096 bytes 00:35:22.048 Memory Page Size Maximum: 4096 bytes 00:35:22.048 Persistent Memory Region: Not Supported 00:35:22.048 Optional Asynchronous Events Supported 00:35:22.048 Namespace Attribute Notices: Not Supported 00:35:22.048 Firmware Activation Notices: Not Supported 00:35:22.048 ANA Change Notices: Not Supported 00:35:22.048 PLE Aggregate Log Change Notices: Not Supported 
00:35:22.048 LBA Status Info Alert Notices: Not Supported 00:35:22.048 EGE Aggregate Log Change Notices: Not Supported 00:35:22.048 Normal NVM Subsystem Shutdown event: Not Supported 00:35:22.048 Zone Descriptor Change Notices: Not Supported 00:35:22.048 Discovery Log Change Notices: Supported 00:35:22.048 Controller Attributes 00:35:22.048 128-bit Host Identifier: Not Supported 00:35:22.048 Non-Operational Permissive Mode: Not Supported 00:35:22.048 NVM Sets: Not Supported 00:35:22.048 Read Recovery Levels: Not Supported 00:35:22.048 Endurance Groups: Not Supported 00:35:22.048 Predictable Latency Mode: Not Supported 00:35:22.048 Traffic Based Keep ALive: Not Supported 00:35:22.048 Namespace Granularity: Not Supported 00:35:22.048 SQ Associations: Not Supported 00:35:22.048 UUID List: Not Supported 00:35:22.048 Multi-Domain Subsystem: Not Supported 00:35:22.048 Fixed Capacity Management: Not Supported 00:35:22.048 Variable Capacity Management: Not Supported 00:35:22.048 Delete Endurance Group: Not Supported 00:35:22.048 Delete NVM Set: Not Supported 00:35:22.048 Extended LBA Formats Supported: Not Supported 00:35:22.048 Flexible Data Placement Supported: Not Supported 00:35:22.048 00:35:22.048 Controller Memory Buffer Support 00:35:22.048 ================================ 00:35:22.048 Supported: No 00:35:22.048 00:35:22.048 Persistent Memory Region Support 00:35:22.048 ================================ 00:35:22.048 Supported: No 00:35:22.048 00:35:22.048 Admin Command Set Attributes 00:35:22.048 ============================ 00:35:22.048 Security Send/Receive: Not Supported 00:35:22.048 Format NVM: Not Supported 00:35:22.048 Firmware Activate/Download: Not Supported 00:35:22.048 Namespace Management: Not Supported 00:35:22.048 Device Self-Test: Not Supported 00:35:22.048 Directives: Not Supported 00:35:22.048 NVMe-MI: Not Supported 00:35:22.048 Virtualization Management: Not Supported 00:35:22.048 Doorbell Buffer Config: Not Supported 00:35:22.048 Get LBA Status 
Capability: Not Supported 00:35:22.048 Command & Feature Lockdown Capability: Not Supported 00:35:22.048 Abort Command Limit: 1 00:35:22.048 Async Event Request Limit: 1 00:35:22.048 Number of Firmware Slots: N/A 00:35:22.048 Firmware Slot 1 Read-Only: N/A 00:35:22.048 Firmware Activation Without Reset: N/A 00:35:22.048 Multiple Update Detection Support: N/A 00:35:22.048 Firmware Update Granularity: No Information Provided 00:35:22.048 Per-Namespace SMART Log: No 00:35:22.048 Asymmetric Namespace Access Log Page: Not Supported 00:35:22.048 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:22.049 Command Effects Log Page: Not Supported 00:35:22.049 Get Log Page Extended Data: Supported 00:35:22.049 Telemetry Log Pages: Not Supported 00:35:22.049 Persistent Event Log Pages: Not Supported 00:35:22.049 Supported Log Pages Log Page: May Support 00:35:22.049 Commands Supported & Effects Log Page: Not Supported 00:35:22.049 Feature Identifiers & Effects Log Page:May Support 00:35:22.049 NVMe-MI Commands & Effects Log Page: May Support 00:35:22.049 Data Area 4 for Telemetry Log: Not Supported 00:35:22.049 Error Log Page Entries Supported: 1 00:35:22.049 Keep Alive: Not Supported 00:35:22.049 00:35:22.049 NVM Command Set Attributes 00:35:22.049 ========================== 00:35:22.049 Submission Queue Entry Size 00:35:22.049 Max: 1 00:35:22.049 Min: 1 00:35:22.049 Completion Queue Entry Size 00:35:22.049 Max: 1 00:35:22.049 Min: 1 00:35:22.049 Number of Namespaces: 0 00:35:22.049 Compare Command: Not Supported 00:35:22.049 Write Uncorrectable Command: Not Supported 00:35:22.049 Dataset Management Command: Not Supported 00:35:22.049 Write Zeroes Command: Not Supported 00:35:22.049 Set Features Save Field: Not Supported 00:35:22.049 Reservations: Not Supported 00:35:22.049 Timestamp: Not Supported 00:35:22.049 Copy: Not Supported 00:35:22.049 Volatile Write Cache: Not Present 00:35:22.049 Atomic Write Unit (Normal): 1 00:35:22.049 Atomic Write Unit (PFail): 1 
00:35:22.049 Atomic Compare & Write Unit: 1 00:35:22.049 Fused Compare & Write: Not Supported 00:35:22.049 Scatter-Gather List 00:35:22.049 SGL Command Set: Supported 00:35:22.049 SGL Keyed: Not Supported 00:35:22.049 SGL Bit Bucket Descriptor: Not Supported 00:35:22.049 SGL Metadata Pointer: Not Supported 00:35:22.049 Oversized SGL: Not Supported 00:35:22.049 SGL Metadata Address: Not Supported 00:35:22.049 SGL Offset: Supported 00:35:22.049 Transport SGL Data Block: Not Supported 00:35:22.049 Replay Protected Memory Block: Not Supported 00:35:22.049 00:35:22.049 Firmware Slot Information 00:35:22.049 ========================= 00:35:22.049 Active slot: 0 00:35:22.049 00:35:22.049 00:35:22.049 Error Log 00:35:22.049 ========= 00:35:22.049 00:35:22.049 Active Namespaces 00:35:22.049 ================= 00:35:22.049 Discovery Log Page 00:35:22.049 ================== 00:35:22.049 Generation Counter: 2 00:35:22.049 Number of Records: 2 00:35:22.049 Record Format: 0 00:35:22.049 00:35:22.049 Discovery Log Entry 0 00:35:22.049 ---------------------- 00:35:22.049 Transport Type: 3 (TCP) 00:35:22.049 Address Family: 1 (IPv4) 00:35:22.049 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:22.049 Entry Flags: 00:35:22.049 Duplicate Returned Information: 0 00:35:22.049 Explicit Persistent Connection Support for Discovery: 0 00:35:22.049 Transport Requirements: 00:35:22.049 Secure Channel: Not Specified 00:35:22.049 Port ID: 1 (0x0001) 00:35:22.049 Controller ID: 65535 (0xffff) 00:35:22.049 Admin Max SQ Size: 32 00:35:22.049 Transport Service Identifier: 4420 00:35:22.049 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:22.049 Transport Address: 10.0.0.1 00:35:22.049 Discovery Log Entry 1 00:35:22.049 ---------------------- 00:35:22.049 Transport Type: 3 (TCP) 00:35:22.049 Address Family: 1 (IPv4) 00:35:22.049 Subsystem Type: 2 (NVM Subsystem) 00:35:22.049 Entry Flags: 00:35:22.049 Duplicate Returned Information: 0 00:35:22.049 Explicit Persistent 
Connection Support for Discovery: 0 00:35:22.049 Transport Requirements: 00:35:22.049 Secure Channel: Not Specified 00:35:22.049 Port ID: 1 (0x0001) 00:35:22.049 Controller ID: 65535 (0xffff) 00:35:22.049 Admin Max SQ Size: 32 00:35:22.049 Transport Service Identifier: 4420 00:35:22.049 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:22.049 Transport Address: 10.0.0.1 00:35:22.049 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:22.049 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.049 get_feature(0x01) failed 00:35:22.049 get_feature(0x02) failed 00:35:22.049 get_feature(0x04) failed 00:35:22.049 ===================================================== 00:35:22.049 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:22.049 ===================================================== 00:35:22.049 Controller Capabilities/Features 00:35:22.049 ================================ 00:35:22.049 Vendor ID: 0000 00:35:22.049 Subsystem Vendor ID: 0000 00:35:22.049 Serial Number: 8810993e5fdb41c8eb36 00:35:22.049 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:22.049 Firmware Version: 6.7.0-68 00:35:22.049 Recommended Arb Burst: 6 00:35:22.049 IEEE OUI Identifier: 00 00 00 00:35:22.049 Multi-path I/O 00:35:22.049 May have multiple subsystem ports: Yes 00:35:22.049 May have multiple controllers: Yes 00:35:22.049 Associated with SR-IOV VF: No 00:35:22.049 Max Data Transfer Size: Unlimited 00:35:22.049 Max Number of Namespaces: 1024 00:35:22.049 Max Number of I/O Queues: 128 00:35:22.049 NVMe Specification Version (VS): 1.3 00:35:22.049 NVMe Specification Version (Identify): 1.3 00:35:22.049 Maximum Queue Entries: 1024 00:35:22.049 Contiguous Queues Required: No 00:35:22.049 Arbitration Mechanisms Supported 
00:35:22.049 Weighted Round Robin: Not Supported 00:35:22.049 Vendor Specific: Not Supported 00:35:22.049 Reset Timeout: 7500 ms 00:35:22.049 Doorbell Stride: 4 bytes 00:35:22.049 NVM Subsystem Reset: Not Supported 00:35:22.049 Command Sets Supported 00:35:22.049 NVM Command Set: Supported 00:35:22.049 Boot Partition: Not Supported 00:35:22.049 Memory Page Size Minimum: 4096 bytes 00:35:22.049 Memory Page Size Maximum: 4096 bytes 00:35:22.049 Persistent Memory Region: Not Supported 00:35:22.049 Optional Asynchronous Events Supported 00:35:22.049 Namespace Attribute Notices: Supported 00:35:22.049 Firmware Activation Notices: Not Supported 00:35:22.049 ANA Change Notices: Supported 00:35:22.049 PLE Aggregate Log Change Notices: Not Supported 00:35:22.049 LBA Status Info Alert Notices: Not Supported 00:35:22.049 EGE Aggregate Log Change Notices: Not Supported 00:35:22.049 Normal NVM Subsystem Shutdown event: Not Supported 00:35:22.049 Zone Descriptor Change Notices: Not Supported 00:35:22.049 Discovery Log Change Notices: Not Supported 00:35:22.049 Controller Attributes 00:35:22.049 128-bit Host Identifier: Supported 00:35:22.049 Non-Operational Permissive Mode: Not Supported 00:35:22.049 NVM Sets: Not Supported 00:35:22.049 Read Recovery Levels: Not Supported 00:35:22.049 Endurance Groups: Not Supported 00:35:22.049 Predictable Latency Mode: Not Supported 00:35:22.049 Traffic Based Keep ALive: Supported 00:35:22.049 Namespace Granularity: Not Supported 00:35:22.049 SQ Associations: Not Supported 00:35:22.049 UUID List: Not Supported 00:35:22.049 Multi-Domain Subsystem: Not Supported 00:35:22.049 Fixed Capacity Management: Not Supported 00:35:22.049 Variable Capacity Management: Not Supported 00:35:22.049 Delete Endurance Group: Not Supported 00:35:22.049 Delete NVM Set: Not Supported 00:35:22.049 Extended LBA Formats Supported: Not Supported 00:35:22.049 Flexible Data Placement Supported: Not Supported 00:35:22.049 00:35:22.049 Controller Memory Buffer Support 
00:35:22.049 ================================ 00:35:22.049 Supported: No 00:35:22.049 00:35:22.049 Persistent Memory Region Support 00:35:22.049 ================================ 00:35:22.049 Supported: No 00:35:22.049 00:35:22.049 Admin Command Set Attributes 00:35:22.049 ============================ 00:35:22.049 Security Send/Receive: Not Supported 00:35:22.049 Format NVM: Not Supported 00:35:22.049 Firmware Activate/Download: Not Supported 00:35:22.049 Namespace Management: Not Supported 00:35:22.049 Device Self-Test: Not Supported 00:35:22.049 Directives: Not Supported 00:35:22.049 NVMe-MI: Not Supported 00:35:22.049 Virtualization Management: Not Supported 00:35:22.049 Doorbell Buffer Config: Not Supported 00:35:22.049 Get LBA Status Capability: Not Supported 00:35:22.049 Command & Feature Lockdown Capability: Not Supported 00:35:22.049 Abort Command Limit: 4 00:35:22.049 Async Event Request Limit: 4 00:35:22.049 Number of Firmware Slots: N/A 00:35:22.049 Firmware Slot 1 Read-Only: N/A 00:35:22.049 Firmware Activation Without Reset: N/A 00:35:22.049 Multiple Update Detection Support: N/A 00:35:22.049 Firmware Update Granularity: No Information Provided 00:35:22.049 Per-Namespace SMART Log: Yes 00:35:22.049 Asymmetric Namespace Access Log Page: Supported 00:35:22.049 ANA Transition Time : 10 sec 00:35:22.049 00:35:22.049 Asymmetric Namespace Access Capabilities 00:35:22.049 ANA Optimized State : Supported 00:35:22.049 ANA Non-Optimized State : Supported 00:35:22.049 ANA Inaccessible State : Supported 00:35:22.049 ANA Persistent Loss State : Supported 00:35:22.049 ANA Change State : Supported 00:35:22.049 ANAGRPID is not changed : No 00:35:22.049 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:22.049 00:35:22.049 ANA Group Identifier Maximum : 128 00:35:22.049 Number of ANA Group Identifiers : 128 00:35:22.050 Max Number of Allowed Namespaces : 1024 00:35:22.050 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:22.050 Command Effects Log Page: Supported 
00:35:22.050 Get Log Page Extended Data: Supported 00:35:22.050 Telemetry Log Pages: Not Supported 00:35:22.050 Persistent Event Log Pages: Not Supported 00:35:22.050 Supported Log Pages Log Page: May Support 00:35:22.050 Commands Supported & Effects Log Page: Not Supported 00:35:22.050 Feature Identifiers & Effects Log Page:May Support 00:35:22.050 NVMe-MI Commands & Effects Log Page: May Support 00:35:22.050 Data Area 4 for Telemetry Log: Not Supported 00:35:22.050 Error Log Page Entries Supported: 128 00:35:22.050 Keep Alive: Supported 00:35:22.050 Keep Alive Granularity: 1000 ms 00:35:22.050 00:35:22.050 NVM Command Set Attributes 00:35:22.050 ========================== 00:35:22.050 Submission Queue Entry Size 00:35:22.050 Max: 64 00:35:22.050 Min: 64 00:35:22.050 Completion Queue Entry Size 00:35:22.050 Max: 16 00:35:22.050 Min: 16 00:35:22.050 Number of Namespaces: 1024 00:35:22.050 Compare Command: Not Supported 00:35:22.050 Write Uncorrectable Command: Not Supported 00:35:22.050 Dataset Management Command: Supported 00:35:22.050 Write Zeroes Command: Supported 00:35:22.050 Set Features Save Field: Not Supported 00:35:22.050 Reservations: Not Supported 00:35:22.050 Timestamp: Not Supported 00:35:22.050 Copy: Not Supported 00:35:22.050 Volatile Write Cache: Present 00:35:22.050 Atomic Write Unit (Normal): 1 00:35:22.050 Atomic Write Unit (PFail): 1 00:35:22.050 Atomic Compare & Write Unit: 1 00:35:22.050 Fused Compare & Write: Not Supported 00:35:22.050 Scatter-Gather List 00:35:22.050 SGL Command Set: Supported 00:35:22.050 SGL Keyed: Not Supported 00:35:22.050 SGL Bit Bucket Descriptor: Not Supported 00:35:22.050 SGL Metadata Pointer: Not Supported 00:35:22.050 Oversized SGL: Not Supported 00:35:22.050 SGL Metadata Address: Not Supported 00:35:22.050 SGL Offset: Supported 00:35:22.050 Transport SGL Data Block: Not Supported 00:35:22.050 Replay Protected Memory Block: Not Supported 00:35:22.050 00:35:22.050 Firmware Slot Information 00:35:22.050 
========================= 00:35:22.050 Active slot: 0 00:35:22.050 00:35:22.050 Asymmetric Namespace Access 00:35:22.050 =========================== 00:35:22.050 Change Count : 0 00:35:22.050 Number of ANA Group Descriptors : 1 00:35:22.050 ANA Group Descriptor : 0 00:35:22.050 ANA Group ID : 1 00:35:22.050 Number of NSID Values : 1 00:35:22.050 Change Count : 0 00:35:22.050 ANA State : 1 00:35:22.050 Namespace Identifier : 1 00:35:22.050 00:35:22.050 Commands Supported and Effects 00:35:22.050 ============================== 00:35:22.050 Admin Commands 00:35:22.050 -------------- 00:35:22.050 Get Log Page (02h): Supported 00:35:22.050 Identify (06h): Supported 00:35:22.050 Abort (08h): Supported 00:35:22.050 Set Features (09h): Supported 00:35:22.050 Get Features (0Ah): Supported 00:35:22.050 Asynchronous Event Request (0Ch): Supported 00:35:22.050 Keep Alive (18h): Supported 00:35:22.050 I/O Commands 00:35:22.050 ------------ 00:35:22.050 Flush (00h): Supported 00:35:22.050 Write (01h): Supported LBA-Change 00:35:22.050 Read (02h): Supported 00:35:22.050 Write Zeroes (08h): Supported LBA-Change 00:35:22.050 Dataset Management (09h): Supported 00:35:22.050 00:35:22.050 Error Log 00:35:22.050 ========= 00:35:22.050 Entry: 0 00:35:22.050 Error Count: 0x3 00:35:22.050 Submission Queue Id: 0x0 00:35:22.050 Command Id: 0x5 00:35:22.050 Phase Bit: 0 00:35:22.050 Status Code: 0x2 00:35:22.050 Status Code Type: 0x0 00:35:22.050 Do Not Retry: 1 00:35:22.311 Error Location: 0x28 00:35:22.311 LBA: 0x0 00:35:22.311 Namespace: 0x0 00:35:22.311 Vendor Log Page: 0x0 00:35:22.311 ----------- 00:35:22.311 Entry: 1 00:35:22.311 Error Count: 0x2 00:35:22.311 Submission Queue Id: 0x0 00:35:22.311 Command Id: 0x5 00:35:22.311 Phase Bit: 0 00:35:22.311 Status Code: 0x2 00:35:22.311 Status Code Type: 0x0 00:35:22.311 Do Not Retry: 1 00:35:22.311 Error Location: 0x28 00:35:22.311 LBA: 0x0 00:35:22.311 Namespace: 0x0 00:35:22.311 Vendor Log Page: 0x0 00:35:22.311 ----------- 00:35:22.311 
Entry: 2 00:35:22.311 Error Count: 0x1 00:35:22.311 Submission Queue Id: 0x0 00:35:22.311 Command Id: 0x4 00:35:22.311 Phase Bit: 0 00:35:22.311 Status Code: 0x2 00:35:22.311 Status Code Type: 0x0 00:35:22.311 Do Not Retry: 1 00:35:22.311 Error Location: 0x28 00:35:22.311 LBA: 0x0 00:35:22.311 Namespace: 0x0 00:35:22.311 Vendor Log Page: 0x0 00:35:22.311 00:35:22.311 Number of Queues 00:35:22.311 ================ 00:35:22.311 Number of I/O Submission Queues: 128 00:35:22.311 Number of I/O Completion Queues: 128 00:35:22.311 00:35:22.311 ZNS Specific Controller Data 00:35:22.311 ============================ 00:35:22.311 Zone Append Size Limit: 0 00:35:22.311 00:35:22.311 00:35:22.311 Active Namespaces 00:35:22.311 ================= 00:35:22.311 get_feature(0x05) failed 00:35:22.311 Namespace ID:1 00:35:22.311 Command Set Identifier: NVM (00h) 00:35:22.311 Deallocate: Supported 00:35:22.311 Deallocated/Unwritten Error: Not Supported 00:35:22.311 Deallocated Read Value: Unknown 00:35:22.311 Deallocate in Write Zeroes: Not Supported 00:35:22.311 Deallocated Guard Field: 0xFFFF 00:35:22.311 Flush: Supported 00:35:22.311 Reservation: Not Supported 00:35:22.311 Namespace Sharing Capabilities: Multiple Controllers 00:35:22.311 Size (in LBAs): 1953525168 (931GiB) 00:35:22.311 Capacity (in LBAs): 1953525168 (931GiB) 00:35:22.311 Utilization (in LBAs): 1953525168 (931GiB) 00:35:22.311 UUID: 9af8212d-62bd-4797-ba46-7d9bf726e40e 00:35:22.311 Thin Provisioning: Not Supported 00:35:22.311 Per-NS Atomic Units: Yes 00:35:22.311 Atomic Boundary Size (Normal): 0 00:35:22.311 Atomic Boundary Size (PFail): 0 00:35:22.311 Atomic Boundary Offset: 0 00:35:22.311 NGUID/EUI64 Never Reused: No 00:35:22.311 ANA group ID: 1 00:35:22.311 Namespace Write Protected: No 00:35:22.311 Number of LBA Formats: 1 00:35:22.311 Current LBA Format: LBA Format #00 00:35:22.311 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:22.311 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:22.311 rmmod nvme_tcp 00:35:22.311 rmmod nvme_fabrics 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:22.311 08:31:31 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.218 08:31:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:24.218 08:31:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:24.218 08:31:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:24.218 08:31:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:35:24.218 08:31:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:24.218 08:31:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:24.218 08:31:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:24.218 08:31:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:24.218 08:31:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:24.218 08:31:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:24.218 08:31:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:25.591 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:25.591 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:25.591 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:25.591 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:25.591 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:25.591 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:25.591 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:25.591 0000:00:04.0 (8086 0e20): ioatdma -> 
vfio-pci 00:35:25.591 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:25.591 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:25.591 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:25.591 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:25.591 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:25.591 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:25.591 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:25.591 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:26.525 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:26.784 00:35:26.784 real 0m9.220s 00:35:26.784 user 0m1.920s 00:35:26.784 sys 0m3.286s 00:35:26.784 08:31:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:26.784 08:31:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:26.784 ************************************ 00:35:26.784 END TEST nvmf_identify_kernel_target 00:35:26.784 ************************************ 00:35:26.784 08:31:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:26.784 08:31:36 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:26.784 08:31:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:26.784 08:31:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:26.784 08:31:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:26.784 ************************************ 00:35:26.784 START TEST nvmf_auth_host 00:35:26.784 ************************************ 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:26.784 * Looking for test storage... 
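The `ioatdma -> vfio-pci` and `nvme -> vfio-pci` lines above come from `scripts/setup.sh` rebinding PCI functions to `vfio-pci` before the next test starts. The script itself is not shown in this log; the following is a hedged dry-run sketch of the standard sysfs `driver_override` rebind pattern it implies (it echoes the writes instead of performing them, since the real writes need root, and `rebind_cmds` is our name, not SPDK's):

```shell
#!/bin/sh
# Dry-run sketch of a sysfs PCI driver rebind (assumed mechanism, not
# SPDK's actual setup.sh). The device address is taken from the log above.
rebind_cmds() {
  dev=$1 drv=$2
  # 1. detach from the current driver, 2. pin the override, 3. reprobe
  printf '%s\n' \
    "echo $dev > /sys/bus/pci/devices/$dev/driver/unbind" \
    "echo $drv > /sys/bus/pci/devices/$dev/driver_override" \
    "echo $dev > /sys/bus/pci/drivers_probe"
}

rebind_cmds 0000:00:04.0 vfio-pci
```

Running the emitted commands as root would reproduce one `ioatdma -> vfio-pci` transition from the log; `driver_override` and `drivers_probe` are standard kernel PCI sysfs entries.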
00:35:26.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.784 
08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:26.784 
08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:35:26.784 08:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:28.687 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.687 08:31:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:28.687 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.687 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:35:28.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:28.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:28.688 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:28.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:28.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:35:28.945 00:35:28.945 --- 10.0.0.2 ping statistics --- 00:35:28.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.945 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:28.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:28.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:35:28.945 00:35:28.945 --- 10.0.0.1 ping statistics --- 00:35:28.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.945 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.945 08:31:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=64780 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 64780 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 64780 ']' 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:28.945 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:29.202 08:31:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e2a0729a898c663fd160e4506ace23f7 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Cdd 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e2a0729a898c663fd160e4506ace23f7 0 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e2a0729a898c663fd160e4506ace23f7 0 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e2a0729a898c663fd160e4506ace23f7 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:29.202 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Cdd 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Cdd 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Cdd 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bd62c0a0831dab40cb057759a3fbee2fb0dc3ae297d6a2b0dec698307c4d0ae7 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Xen 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bd62c0a0831dab40cb057759a3fbee2fb0dc3ae297d6a2b0dec698307c4d0ae7 3 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bd62c0a0831dab40cb057759a3fbee2fb0dc3ae297d6a2b0dec698307c4d0ae7 3 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bd62c0a0831dab40cb057759a3fbee2fb0dc3ae297d6a2b0dec698307c4d0ae7 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Xen 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Xen 00:35:29.460 08:31:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Xen 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb6387956868243b5d9f06af259edc10ae77341f9157a62b 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gUs 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb6387956868243b5d9f06af259edc10ae77341f9157a62b 0 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb6387956868243b5d9f06af259edc10ae77341f9157a62b 0 00:35:29.460 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb6387956868243b5d9f06af259edc10ae77341f9157a62b 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gUs 00:35:29.461 08:31:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gUs 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.gUs 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=495d8599925d4194e00d6cd7638a3596ffc1f85bab00d659 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iyU 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 495d8599925d4194e00d6cd7638a3596ffc1f85bab00d659 2 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 495d8599925d4194e00d6cd7638a3596ffc1f85bab00d659 2 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=495d8599925d4194e00d6cd7638a3596ffc1f85bab00d659 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:29.461 08:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.461 08:31:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iyU 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iyU 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.iyU 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2d76fc55c4d6add0585fd0aa4c1dffff 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.g0h 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2d76fc55c4d6add0585fd0aa4c1dffff 1 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2d76fc55c4d6add0585fd0aa4c1dffff 1 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2d76fc55c4d6add0585fd0aa4c1dffff 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.g0h 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.g0h 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.g0h 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4c93cb6b6df5a2f6af94b0d5f45a1823 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uO2 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4c93cb6b6df5a2f6af94b0d5f45a1823 1 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4c93cb6b6df5a2f6af94b0d5f45a1823 1 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4c93cb6b6df5a2f6af94b0d5f45a1823 00:35:29.461 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:29.461 
08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uO2 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uO2 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uO2 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5d5bf778cfdffae91e63312beea15deaf5cb0f6a6c47782d 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.cCU 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5d5bf778cfdffae91e63312beea15deaf5cb0f6a6c47782d 2 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5d5bf778cfdffae91e63312beea15deaf5cb0f6a6c47782d 2 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=5d5bf778cfdffae91e63312beea15deaf5cb0f6a6c47782d 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.cCU 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.cCU 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.cCU 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:29.719 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=479cb1c6358654866bf0128c6aaf94b2 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.bBn 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 479cb1c6358654866bf0128c6aaf94b2 0 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 479cb1c6358654866bf0128c6aaf94b2 0 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.720 08:31:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=479cb1c6358654866bf0128c6aaf94b2 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.bBn 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.bBn 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.bBn 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d8c50456dc93ce040ea37400c0d1b9750083ba3f54fa714f3b5486f62e4b9811 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.TbT 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d8c50456dc93ce040ea37400c0d1b9750083ba3f54fa714f3b5486f62e4b9811 3 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d8c50456dc93ce040ea37400c0d1b9750083ba3f54fa714f3b5486f62e4b9811 3 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d8c50456dc93ce040ea37400c0d1b9750083ba3f54fa714f3b5486f62e4b9811 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.TbT 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.TbT 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.TbT 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 64780 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 64780 ']' 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
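The key-generation trace above repeats one recipe per secret: read N random bytes with `xxd -p`, then feed the hex string through an inline `python -` snippet (nvmf/common.sh@705, body not captured in the trace) to produce the `DHHC-1` secret that later shows up as `key=DHHC-1:00:...:`. A minimal re-implementation of that formatting step, assuming the standard NVMe DH-HMAC-CHAP secret layout (base64 of the key bytes followed by their little-endian CRC-32):

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int) -> str:
    """Wrap an ASCII key in the DHHC-1 secret format:
    DHHC-1:<digest id, 2 hex digits>:<base64(key || crc32(key) LE)>:
    Layout assumed from the NVMe DH-HMAC-CHAP secret representation;
    the exact body behind nvmf/common.sh@705 is not shown in the trace."""
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode()
    return "DHHC-1:{:02x}:{}:".format(digest, b64)

# Note: the key material is the hex *string* itself (the 48-char output of
# `xxd -p -c0 -l 24 /dev/urandom`), not the decoded bytes.
secret = format_dhchap_key(
    "eb6387956868243b5d9f06af259edc10ae77341f9157a62b", 0)
```

Decoding the base64 payload of such a secret and stripping the trailing four bytes recovers the original hex string, which is how the shape can be sanity-checked against the `DHHC-1:00:ZWI2...` values that appear later in this log.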
00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:29.720 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Cdd 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Xen ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xen 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.gUs 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.iyU ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iyU 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.g0h 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uO2 ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uO2 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.cCU 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.978 
08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.bBn ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.bBn 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.TbT 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:29.978 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:30.237 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:30.237 08:31:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:31.169 Waiting for block devices as requested 00:35:31.169 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:31.169 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:31.428 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:31.428 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:31.687 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:31.687 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:31.687 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:31.687 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:31.944 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:31.944 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:31.944 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:31.944 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:32.201 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:32.201 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:32.201 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:32.201 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:32.458 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:32.718 No valid GPT data, bailing 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:32.718 00:35:32.718 Discovery Log Number of Records 2, Generation counter 2 00:35:32.718 =====Discovery Log Entry 0====== 00:35:32.718 trtype: tcp 00:35:32.718 adrfam: ipv4 00:35:32.718 subtype: current discovery subsystem 00:35:32.718 treq: not specified, sq flow control disable supported 00:35:32.718 portid: 1 00:35:32.718 trsvcid: 4420 00:35:32.718 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:32.718 traddr: 10.0.0.1 00:35:32.718 eflags: none 00:35:32.718 sectype: none 00:35:32.718 =====Discovery Log Entry 1====== 00:35:32.718 trtype: tcp 00:35:32.718 adrfam: ipv4 00:35:32.718 subtype: nvme subsystem 00:35:32.718 treq: not specified, sq flow control disable supported 00:35:32.718 portid: 1 00:35:32.718 trsvcid: 4420 00:35:32.718 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:32.718 traddr: 10.0.0.1 00:35:32.718 eflags: none 00:35:32.718 sectype: none 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.718 08:31:42 
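The `nvme discover` output captured above is a line-oriented set of key/value records separated by `=====Discovery Log Entry N======` headers. For scripting against output of that shape, a small parser (field names taken from the trace itself; this is a sketch, not part of the test suite) could look like:

```python
def parse_discovery_log(text: str):
    """Split nvme-discover-style output into one dict per
    '=====Discovery Log Entry N======' record."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("=====Discovery Log Entry"):
            entries.append({})          # start a new record
        elif entries and ":" in line:
            key, _, value = line.partition(":")
            entries[-1][key.strip()] = value.strip()
    return entries

# Abbreviated sample mirroring the two entries in the trace above.
sample = """\
=====Discovery Log Entry 0======
trtype: tcp
adrfam: ipv4
subnqn: nqn.2014-08.org.nvmexpress.discovery
=====Discovery Log Entry 1======
trtype: tcp
adrfam: ipv4
subnqn: nqn.2024-02.io.spdk:cnode0
"""
entries = parse_discovery_log(sample)
```

The second record's `subnqn` matching `nqn.2024-02.io.spdk:cnode0` is what confirms the kernel target exported the subsystem the test just configured.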
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:32.718 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.719 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.979 nvme0n1 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.979 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.238 nvme0n1 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.238 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.497 nvme0n1 00:35:33.497 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.497 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.497 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.497 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.497 08:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.497 08:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.497 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.497 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.497 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.497 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.497 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.497 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.497 08:31:43 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.498 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.757 nvme0n1 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.757 08:31:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.757 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.017 nvme0n1 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.017 08:31:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:34.017 08:31:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.017 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.275 nvme0n1 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:34.275 08:31:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.275 08:31:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.275 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.535 nvme0n1 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.535 
08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.535 08:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.795 nvme0n1 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.795 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.055 nvme0n1 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.055 08:31:44 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha256 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.055 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.056 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:35.056 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.056 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.315 nvme0n1 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.315 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.582 nvme0n1 00:35:35.582 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.582 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.582 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.582 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.582 08:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.582 08:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:35.582 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:35.583 08:31:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.583 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.882 nvme0n1 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.882 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.142 nvme0n1 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.142 08:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.401 nvme0n1 00:35:36.401 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.401 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.401 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.401 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.401 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.660 08:31:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.660 08:31:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.660 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.920 nvme0n1 00:35:36.920 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.920 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.920 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.920 08:31:46 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:36.921 
08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.921 08:31:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.921 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.182 nvme0n1 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.182 
08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:35:37.182 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:37.183 08:31:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.183 08:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.752 nvme0n1 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.752 08:31:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.752 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.753 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:37.753 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.753 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.320 nvme0n1 00:35:38.320 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.320 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.321 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.321 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.321 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.321 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.578 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.579 08:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.144 nvme0n1 
00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.144 08:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.711 nvme0n1 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.711 08:31:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.711 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.279 nvme0n1 00:35:40.279 08:31:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.279 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.279 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.279 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.279 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 
00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.280 08:31:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.280 08:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.216 nvme0n1 00:35:41.216 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.216 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.216 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.216 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.216 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.216 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.216 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.216 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:35:41.216 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.217 08:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.152 nvme0n1 00:35:42.152 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.152 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.152 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.152 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.152 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.152 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.152 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.152 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.152 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.152 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.410 08:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.341 nvme0n1 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:43.341 08:31:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:43.341 08:31:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.341 08:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.280 nvme0n1 00:35:44.280 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.280 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.280 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.280 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.280 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.280 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.281 08:31:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.281 08:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.214 nvme0n1 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:45.214 
08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.214 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.471 nvme0n1 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.471 
08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:45.471 08:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:45.472 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.472 08:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.472 nvme0n1 00:35:45.472 08:31:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.472 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.472 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.472 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.472 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.472 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.729 08:31:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.729 08:31:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.729 nvme0n1 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.729 
08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.729 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.989 08:31:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.989 nvme0n1 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:45.989 08:31:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.989 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.249 nvme0n1 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:46.249 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:46.250 08:31:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.250 08:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.509 nvme0n1 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:46.509 08:31:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.509 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.769 nvme0n1 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:46.769 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:46.770 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.770 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.770 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:46.770 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.770 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:46.770 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:46.770 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:46.770 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:46.770 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.770 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.028 nvme0n1 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:47.028 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.029 08:31:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.029 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.288 nvme0n1 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.288 08:31:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.288 08:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.548 nvme0n1 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:47.548 
08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.548 
08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.548 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.117 nvme0n1 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.117 08:31:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.117 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.376 nvme0n1 00:35:48.376 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.377 08:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.637 nvme0n1 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.637 08:31:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.637 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.203 nvme0n1 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.203 08:31:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:49.203 08:31:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.203 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.489 nvme0n1 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:49.489 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.490 08:31:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.490 08:31:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.490 08:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.087 nvme0n1 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.087 
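The `key=DHHC-1:...` and `ckey=DHHC-1:...` values echoed above follow the NVMe-oF DH-HMAC-CHAP secret representation: a `DHHC-1` prefix, a hash indicator (`00` = unhashed secret, `01`/`02`/`03` = SHA-256/384/512), and a base64 payload, separated by colons. A minimal sketch of splitting such a secret into its fields (the helper name `dhchap_key_fields` is hypothetical, not part of the test suite):

```shell
# Hypothetical helper: split a DH-HMAC-CHAP secret of the form
#   DHHC-1:<hash>:<base64 payload>:
# into its three fields. hash 00 means the secret is used as-is;
# 01/02/03 mean it was transformed with SHA-256/384/512.
dhchap_key_fields() {
    local key=$1
    local IFS=:
    # shellcheck disable=SC2086  # deliberate word splitting on ':'
    set -- $key
    printf 'prefix=%s hash=%s payload=%s\n' "$1" "$2" "$3"
}

# Field split of the keyid=0 secret seen in the trace:
dhchap_key_fields "DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK:"
# -> prefix=DHHC-1 hash=00 payload=ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK
```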
08:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.087 08:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.656 nvme0n1 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:50.656 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.657 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.224 nvme0n1 00:35:51.224 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.224 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.224 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.224 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.225 08:32:00 
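The repeated `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line in the trace is a bash idiom for an optional argument: the array expands to two words when a controller key exists for that keyid and to nothing at all otherwise, so the later `rpc_cmd ... "${ckey[@]}"` call needs no branching. A small standalone sketch of the pattern (the `build_ckey_args` wrapper and the sample `ckeys` values are illustrative, not from the harness):

```shell
# Optional-argument pattern: the ckey array is empty when ckeys[keyid]
# is unset or empty, so splicing "${ckey[@]}" adds no arguments.
ckeys=([0]="secret0" [4]="")   # keyid 4 deliberately has no controller key

build_ckey_args() {
    local keyid=$1
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"          # number of words the array contributes
}

build_ckey_args 0   # -> 2  (--dhchap-ctrlr-key ckey0)
build_ckey_args 4   # -> 0  (no extra arguments)
```

This is why the keyid=4 iteration later in the log attaches with `--dhchap-key key4` but no `--dhchap-ctrlr-key` flag.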
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.225 08:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.790 nvme0n1 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.790 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.050 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.619 nvme0n1 00:35:52.619 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.619 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.619 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.619 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.619 08:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.619 08:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
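The `get_main_ns_ip` block that recurs before every attach maps the transport to the *name* of the environment variable holding the address, then dereferences it. A sketch of that lookup under assumed placeholder addresses (the real values come from the harness environment, not from these assignments):

```shell
# Transport -> address-variable-name lookup, as in get_main_ns_ip.
# The addresses below are assumed placeholders for illustration.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_INITIATOR_IP=10.0.0.1

get_main_ns_ip() {
    local transport=$1 ip
    declare -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates[$transport]}          # e.g. "NVMF_INITIATOR_IP"
    [[ -n $ip && -n ${!ip} ]] && echo "${!ip}"  # indirect expansion
}

get_main_ns_ip tcp    # -> 10.0.0.1
```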
-- # xtrace_disable 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:52.619 08:32:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.619 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.558 nvme0n1 00:35:53.558 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.558 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.558 08:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.558 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.558 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.558 08:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- 
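Each iteration above exercises the same host-side sequence; extracted from the trace (the NQNs, address, and key names are the harness's own, and `rpc_cmd` is the suite's wrapper around SPDK's rpc.py, so this fragment only runs inside the test environment):

```shell
# Per-iteration host-side flow, as traced in the log:
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0
```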
host/auth.sh@44 -- # keyid=1 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:53.558 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.494 nvme0n1
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1:
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r:
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1:
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]]
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r:
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:54.494 08:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.431 nvme0n1
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==:
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk:
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:55.431 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==:
00:35:55.432 08:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]]
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk:
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:55.432 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:56.370 nvme0n1
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:56.370 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=:
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=:
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:56.371 08:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.752 nvme0n1
08:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:57.752 08:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:57.752 08:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:57.752 08:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:57.752 08:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.752 08:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK:
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=:
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK:
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=:
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.752 nvme0n1
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==:
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==:
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==:
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==:
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:57.752 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.013 nvme0n1
08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1:
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r:
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1:
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]]
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r:
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:58.013 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.272 nvme0n1
08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==:
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk:
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==:
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]]
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk:
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.272 nvme0n1
08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.272 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=:
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=:
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:58.532 08:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:58.532 nvme0n1
08:32:08
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.532 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.532 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.532 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.532 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.532 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.532 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.532 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.532 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.532 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.793 08:32:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.793 nvme0n1 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.793 
08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.793 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.052 nvme0n1 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.052 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.310 08:32:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.310 nvme0n1 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.310 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.310 08:32:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.568 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.568 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.568 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.568 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.568 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.568 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.568 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:59.568 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.568 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.568 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:59.568 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.569 08:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.569 nvme0n1 00:35:59.569 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.569 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.569 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.569 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.569 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.569 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.827 08:32:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.827 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.828 nvme0n1 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.828 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:36:00.086 08:32:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.086 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.345 nvme0n1 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:00.345 08:32:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.345 08:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.605 nvme0n1 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.605 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.174 nvme0n1 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.174 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.433 nvme0n1 00:36:01.433 08:32:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.433 08:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.690 nvme0n1 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:01.690 
08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:01.690 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.690 08:32:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.253 nvme0n1 00:36:02.253 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.253 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.253 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.253 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.253 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.253 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.253 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.253 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.253 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.253 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.511 08:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.077 nvme0n1 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.077 08:32:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.077 08:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.666 nvme0n1 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:03.666 08:32:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.666 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.240 nvme0n1 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.240 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.241 08:32:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.241 08:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.804 nvme0n1 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJhMDcyOWE4OThjNjYzZmQxNjBlNDUwNmFjZTIzZjeq5dpK: 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: ]] 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ2MmMwYTA4MzFkYWI0MGNiMDU3NzU5YTNmYmVlMmZiMGRjM2FlMjk3ZDZhMmIwZGVjNjk4MzA3YzRkMGFlN8nnaZA=: 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.804 08:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:04.805 08:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.805 08:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:04.805 08:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:04.805 08:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:04.805 08:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:04.805 08:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.805 08:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.182 nvme0n1 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.182 08:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.118 nvme0n1 00:36:07.118 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.118 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.118 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.118 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.118 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmQ3NmZjNTVjNGQ2YWRkMDU4NWZkMGFhNGMxZGZmZmbayVh1: 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: ]] 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM5M2NiNmI2ZGY1YTJmNmFmOTRiMGQ1ZjQ1YTE4MjOvTB/r: 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.119 08:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.055 nvme0n1 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWQ1YmY3NzhjZmRmZmFlOTFlNjMzMTJiZWVhMTVkZWFmNWNiMGY2YTZjNDc3ODJkAqplMg==: 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: ]] 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc5Y2IxYzYzNTg2NTQ4NjZiZjAxMjhjNmFhZjk0YjJQP+uk: 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.055 08:32:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.055 08:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.994 nvme0n1 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.994 08:32:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhjNTA0NTZkYzkzY2UwNDBlYTM3NDAwYzBkMWI5NzUwMDgzYmEzZjU0ZmE3MTRmM2I1NDg2ZjYyZTRiOTgxMRoEdCg=: 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:08.994 08:32:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.994 08:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.931 nvme0n1 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI2Mzg3OTU2ODY4MjQzYjVkOWYwNmFmMjU5ZWRjMTBhZTc3MzQxZjkxNTdhNjJi68TEWg==: 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: ]] 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDk1ZDg1OTk5MjVkNDE5NGUwMGQ2Y2Q3NjM4YTM1OTZmZmMxZjg1YmFiMDBkNjU5JSwRTw==: 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.931 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.190 request: 00:36:10.190 { 00:36:10.190 "name": "nvme0", 00:36:10.190 "trtype": "tcp", 00:36:10.190 "traddr": "10.0.0.1", 00:36:10.190 "adrfam": "ipv4", 00:36:10.190 "trsvcid": "4420", 00:36:10.190 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:10.190 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:10.190 "prchk_reftag": false, 00:36:10.190 "prchk_guard": false, 00:36:10.190 "hdgst": false, 00:36:10.190 "ddgst": false, 00:36:10.190 "method": "bdev_nvme_attach_controller", 00:36:10.190 "req_id": 1 00:36:10.190 } 00:36:10.190 Got JSON-RPC error response 00:36:10.190 response: 00:36:10.190 { 00:36:10.190 "code": -5, 00:36:10.190 "message": "Input/output error" 00:36:10.190 } 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:10.190 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.191 request: 00:36:10.191 { 00:36:10.191 "name": "nvme0", 00:36:10.191 "trtype": "tcp", 00:36:10.191 "traddr": "10.0.0.1", 00:36:10.191 "adrfam": "ipv4", 00:36:10.191 "trsvcid": "4420", 00:36:10.191 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:10.191 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:10.191 "prchk_reftag": false, 00:36:10.191 "prchk_guard": false, 00:36:10.191 "hdgst": false, 00:36:10.191 "ddgst": false, 00:36:10.191 "dhchap_key": "key2", 00:36:10.191 "method": "bdev_nvme_attach_controller", 00:36:10.191 "req_id": 1 00:36:10.191 } 00:36:10.191 Got JSON-RPC error response 00:36:10.191 response: 00:36:10.191 { 
00:36:10.191 "code": -5, 00:36:10.191 "message": "Input/output error" 00:36:10.191 } 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.191 
08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.191 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.449 request: 00:36:10.449 { 00:36:10.449 "name": "nvme0", 00:36:10.449 "trtype": "tcp", 00:36:10.449 "traddr": "10.0.0.1", 00:36:10.449 "adrfam": "ipv4", 00:36:10.449 "trsvcid": "4420", 00:36:10.449 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:10.449 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:10.449 
"prchk_reftag": false, 00:36:10.449 "prchk_guard": false, 00:36:10.449 "hdgst": false, 00:36:10.449 "ddgst": false, 00:36:10.449 "dhchap_key": "key1", 00:36:10.449 "dhchap_ctrlr_key": "ckey2", 00:36:10.449 "method": "bdev_nvme_attach_controller", 00:36:10.449 "req_id": 1 00:36:10.449 } 00:36:10.449 Got JSON-RPC error response 00:36:10.449 response: 00:36:10.449 { 00:36:10.449 "code": -5, 00:36:10.449 "message": "Input/output error" 00:36:10.449 } 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:10.449 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:10.449 rmmod nvme_tcp 00:36:10.449 rmmod nvme_fabrics 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 64780 ']' 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 64780 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 64780 ']' 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 64780 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64780 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64780' 00:36:10.450 killing process with pid 64780 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 64780 00:36:10.450 08:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 64780 00:36:10.708 08:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:10.708 08:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:10.708 08:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:10.708 08:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:10.708 08:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:10.708 08:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.708 08:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:36:10.708 08:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:12.613 08:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:13.990 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:13.990 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:13.990 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:13.990 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:13.990 
0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:13.990 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:13.990 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:13.990 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:13.990 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:13.990 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:13.990 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:13.990 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:13.990 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:13.990 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:13.990 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:13.990 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:14.928 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:14.928 08:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Cdd /tmp/spdk.key-null.gUs /tmp/spdk.key-sha256.g0h /tmp/spdk.key-sha384.cCU /tmp/spdk.key-sha512.TbT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:14.928 08:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:15.860 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:15.860 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:15.860 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:15.860 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:15.860 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:15.860 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:15.860 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:15.860 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:15.860 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:15.860 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:15.860 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:15.860 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:36:16.118 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:16.118 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:16.118 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:16.118 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:16.118 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:16.118 00:36:16.118 real 0m49.471s 00:36:16.118 user 0m47.310s 00:36:16.118 sys 0m5.611s 00:36:16.119 08:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:16.119 08:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.119 ************************************ 00:36:16.119 END TEST nvmf_auth_host 00:36:16.119 ************************************ 00:36:16.119 08:32:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:16.119 08:32:25 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:36:16.119 08:32:25 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:16.119 08:32:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:16.119 08:32:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:16.119 08:32:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.119 ************************************ 00:36:16.119 START TEST nvmf_digest 00:36:16.119 ************************************ 00:36:16.119 08:32:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:16.377 * Looking for test storage... 
00:36:16.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:36:16.377 08:32:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:18.282 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:18.282 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:18.282 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:18.283 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:18.283 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:36:18.283 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:18.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:18.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:36:18.543 00:36:18.543 --- 10.0.0.2 ping statistics --- 00:36:18.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.543 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:18.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:18.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:36:18.543 00:36:18.543 --- 10.0.0.1 ping statistics --- 00:36:18.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.543 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:18.543 08:32:27 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:18.543 ************************************ 00:36:18.543 START TEST nvmf_digest_clean 00:36:18.543 ************************************ 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:36:18.543 08:32:28 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=74220 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 74220 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 74220 ']' 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:18.543 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:18.543 [2024-07-21 08:32:28.080873] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:36:18.543 [2024-07-21 08:32:28.080964] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:18.543 EAL: No free 2048 kB hugepages reported on node 1 00:36:18.543 [2024-07-21 08:32:28.157590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.804 [2024-07-21 08:32:28.254646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:18.804 [2024-07-21 08:32:28.254715] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:18.804 [2024-07-21 08:32:28.254731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:18.804 [2024-07-21 08:32:28.254744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:18.804 [2024-07-21 08:32:28.254755] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:18.804 [2024-07-21 08:32:28.254786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.804 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:18.804 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:18.804 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:18.804 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:18.804 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:18.804 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:18.804 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:18.804 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:18.804 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:18.804 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.804 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:19.099 null0 00:36:19.099 [2024-07-21 08:32:28.471357] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:19.099 [2024-07-21 08:32:28.495579] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:19.099 
08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=74245 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 74245 /var/tmp/bperf.sock 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 74245 ']' 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:19.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:19.099 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:19.099 [2024-07-21 08:32:28.544900] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:36:19.099 [2024-07-21 08:32:28.544975] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74245 ] 00:36:19.099 EAL: No free 2048 kB hugepages reported on node 1 00:36:19.099 [2024-07-21 08:32:28.611170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.099 [2024-07-21 08:32:28.703844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.357 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:19.357 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:19.357 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:19.357 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:19.357 08:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:19.615 08:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:19.615 08:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:19.888 nvme0n1 00:36:19.888 08:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:19.888 08:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:36:20.145 Running I/O for 2 seconds... 00:36:22.050 00:36:22.050 Latency(us) 00:36:22.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.050 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:22.050 nvme0n1 : 2.01 17648.62 68.94 0.00 0.00 7240.91 4029.25 16990.81 00:36:22.050 =================================================================================================================== 00:36:22.050 Total : 17648.62 68.94 0.00 0.00 7240.91 4029.25 16990.81 00:36:22.050 0 00:36:22.050 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:22.050 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:22.050 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:22.050 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:22.050 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:22.050 | select(.opcode=="crc32c") 00:36:22.050 | "\(.module_name) \(.executed)"' 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 74245 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 74245 ']' 00:36:22.308 08:32:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 74245 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74245 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74245' 00:36:22.308 killing process with pid 74245 00:36:22.308 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 74245 00:36:22.309 Received shutdown signal, test time was about 2.000000 seconds 00:36:22.309 00:36:22.309 Latency(us) 00:36:22.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.309 =================================================================================================================== 00:36:22.309 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:22.309 08:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 74245 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- host/digest.sh@80 -- # bs=131072 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=74651 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 74651 /var/tmp/bperf.sock 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 74651 ']' 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:22.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:22.567 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:22.567 [2024-07-21 08:32:32.082223] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:36:22.567 [2024-07-21 08:32:32.082318] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74651 ] 00:36:22.567 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:22.567 Zero copy mechanism will not be used. 00:36:22.567 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.567 [2024-07-21 08:32:32.143714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.824 [2024-07-21 08:32:32.234819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.824 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:22.824 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:22.824 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:22.824 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:22.824 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:23.082 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:23.082 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:23.341 nvme0n1 00:36:23.600 08:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:23.600 08:32:32 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:23.600 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:23.600 Zero copy mechanism will not be used. 00:36:23.600 Running I/O for 2 seconds... 00:36:25.503 00:36:25.503 Latency(us) 00:36:25.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.503 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:25.503 nvme0n1 : 2.00 4949.82 618.73 0.00 0.00 3228.07 837.40 11408.12 00:36:25.503 =================================================================================================================== 00:36:25.503 Total : 4949.82 618.73 0.00 0.00 3228.07 837.40 11408.12 00:36:25.503 0 00:36:25.503 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:25.503 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:25.504 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:25.504 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:25.504 | select(.opcode=="crc32c") 00:36:25.504 | "\(.module_name) \(.executed)"' 00:36:25.504 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:25.762 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:25.762 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:25.762 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:25.762 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:36:25.762 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 74651 00:36:25.762 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 74651 ']' 00:36:25.762 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 74651 00:36:25.762 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:25.762 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:25.762 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74651 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74651' 00:36:26.020 killing process with pid 74651 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 74651 00:36:26.020 Received shutdown signal, test time was about 2.000000 seconds 00:36:26.020 00:36:26.020 Latency(us) 00:36:26.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:26.020 =================================================================================================================== 00:36:26.020 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 74651 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:26.020 08:32:35 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=75062 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 75062 /var/tmp/bperf.sock 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 75062 ']' 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:26.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:26.020 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:26.279 [2024-07-21 08:32:35.672035] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:36:26.279 [2024-07-21 08:32:35.672123] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75062 ] 00:36:26.280 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.280 [2024-07-21 08:32:35.731843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.280 [2024-07-21 08:32:35.820519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:26.280 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:26.280 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:26.280 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:26.280 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:26.280 08:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:26.848 08:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:26.848 08:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:27.107 nvme0n1 00:36:27.367 08:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:27.367 08:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:36:27.367 Running I/O for 2 seconds... 00:36:29.278 00:36:29.278 Latency(us) 00:36:29.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.278 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:29.278 nvme0n1 : 2.01 20511.61 80.12 0.00 0.00 6225.69 3301.07 15243.19 00:36:29.278 =================================================================================================================== 00:36:29.278 Total : 20511.61 80.12 0.00 0.00 6225.69 3301.07 15243.19 00:36:29.278 0 00:36:29.278 08:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:29.278 08:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:29.278 08:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:29.278 08:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:29.278 08:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:29.278 | select(.opcode=="crc32c") 00:36:29.278 | "\(.module_name) \(.executed)"' 00:36:29.536 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:29.536 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:29.536 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:29.536 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:29.536 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 75062 00:36:29.536 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 75062 ']' 00:36:29.536 08:32:39 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 75062 00:36:29.536 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:29.536 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:29.536 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75062 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75062' 00:36:29.793 killing process with pid 75062 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 75062 00:36:29.793 Received shutdown signal, test time was about 2.000000 seconds 00:36:29.793 00:36:29.793 Latency(us) 00:36:29.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.793 =================================================================================================================== 00:36:29.793 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 75062 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:29.793 08:32:39 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=75582 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 75582 /var/tmp/bperf.sock 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 75582 ']' 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:29.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:29.793 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:30.051 [2024-07-21 08:32:39.442573] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:36:30.051 [2024-07-21 08:32:39.442668] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75582 ] 00:36:30.051 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:30.051 Zero copy mechanism will not be used. 00:36:30.051 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.051 [2024-07-21 08:32:39.503276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.051 [2024-07-21 08:32:39.595150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.309 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:30.309 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:30.309 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:30.309 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:30.309 08:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:30.566 08:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:30.566 08:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:30.823 nvme0n1 00:36:30.823 08:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:30.823 08:32:40 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:31.081 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:31.081 Zero copy mechanism will not be used. 00:36:31.081 Running I/O for 2 seconds... 00:36:32.989 00:36:32.989 Latency(us) 00:36:32.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:32.989 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:32.990 nvme0n1 : 2.00 4337.99 542.25 0.00 0.00 3679.93 2767.08 8495.41 00:36:32.990 =================================================================================================================== 00:36:32.990 Total : 4337.99 542.25 0.00 0.00 3679.93 2767.08 8495.41 00:36:32.990 0 00:36:32.990 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:32.990 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:32.990 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:32.990 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:32.990 | select(.opcode=="crc32c") 00:36:32.990 | "\(.module_name) \(.executed)"' 00:36:32.990 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 75582 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 75582 ']' 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 75582 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75582 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75582' 00:36:33.248 killing process with pid 75582 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 75582 00:36:33.248 Received shutdown signal, test time was about 2.000000 seconds 00:36:33.248 00:36:33.248 Latency(us) 00:36:33.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.248 =================================================================================================================== 00:36:33.248 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:33.248 08:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 75582 00:36:33.515 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 74220 00:36:33.515 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 74220 ']' 00:36:33.515 08:32:43 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 74220 00:36:33.515 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:33.515 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:33.515 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74220 00:36:33.515 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:33.515 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:33.515 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74220' 00:36:33.515 killing process with pid 74220 00:36:33.515 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 74220 00:36:33.515 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 74220 00:36:33.773 00:36:33.773 real 0m15.242s 00:36:33.773 user 0m30.022s 00:36:33.773 sys 0m4.341s 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:33.773 ************************************ 00:36:33.773 END TEST nvmf_digest_clean 00:36:33.773 ************************************ 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest -- 
common/autotest_common.sh@10 -- # set +x 00:36:33.773 ************************************ 00:36:33.773 START TEST nvmf_digest_error 00:36:33.773 ************************************ 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=76017 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 76017 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 76017 ']' 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:33.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:33.773 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:33.774 [2024-07-21 08:32:43.374711] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:36:33.774 [2024-07-21 08:32:43.374799] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:34.033 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.033 [2024-07-21 08:32:43.443400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.033 [2024-07-21 08:32:43.531128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:34.033 [2024-07-21 08:32:43.531189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:34.033 [2024-07-21 08:32:43.531215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:34.033 [2024-07-21 08:32:43.531230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:34.033 [2024-07-21 08:32:43.531241] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:34.033 [2024-07-21 08:32:43.531272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.033 [2024-07-21 08:32:43.607864] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.033 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.348 null0 00:36:34.348 [2024-07-21 08:32:43.727069] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:34.348 
[2024-07-21 08:32:43.751284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=76150 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 76150 /var/tmp/bperf.sock 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 76150 ']' 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:34.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:34.348 08:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.348 [2024-07-21 08:32:43.800377] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:36:34.348 [2024-07-21 08:32:43.800455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76150 ] 00:36:34.348 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.348 [2024-07-21 08:32:43.865909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.605 [2024-07-21 08:32:43.957246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.605 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:34.605 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:34.605 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:34.605 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:34.862 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:34.862 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.862 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.862 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:36:34.862 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:34.862 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:35.119 nvme0n1 00:36:35.120 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:35.120 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.120 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:35.120 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.120 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:35.120 08:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:35.120 Running I/O for 2 seconds... 
00:36:35.377 [2024-07-21 08:32:44.759480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.759530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.759558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.777810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.777841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.777858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.789020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.789058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.789081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.806626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.806684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.806704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.822942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.823008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.823052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.836475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.836520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.836537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.854217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.854254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.854273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.866251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.866288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.866312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.883534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.883571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.883601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.899544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.899580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.899600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.916492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.916551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.916580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.929167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.929202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.929228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.946741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.946771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.946788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.377 [2024-07-21 08:32:44.958552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.377 [2024-07-21 08:32:44.958588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.377 [2024-07-21 08:32:44.958610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.378 [2024-07-21 08:32:44.972981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.378 [2024-07-21 08:32:44.973017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.378 [2024-07-21 08:32:44.973037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.378 [2024-07-21 08:32:44.986493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.378 [2024-07-21 08:32:44.986528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.378 [2024-07-21 08:32:44.986559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.378 [2024-07-21 08:32:44.999060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.378 [2024-07-21 08:32:44.999097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.378 [2024-07-21 08:32:44.999117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.013370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.637 [2024-07-21 08:32:45.013406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.637 [2024-07-21 08:32:45.013430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.025738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.637 [2024-07-21 08:32:45.025767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.637 [2024-07-21 08:32:45.025791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.040435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.637 [2024-07-21 08:32:45.040471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.637 [2024-07-21 08:32:45.040500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.053174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.637 [2024-07-21 08:32:45.053210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.637 [2024-07-21 08:32:45.053229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.067113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.637 [2024-07-21 08:32:45.067148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.637 [2024-07-21 08:32:45.067168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.080909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.637 [2024-07-21 08:32:45.080956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.637 [2024-07-21 08:32:45.080976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.096090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.637 [2024-07-21 08:32:45.096125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.637 [2024-07-21 08:32:45.096152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.113307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.637 [2024-07-21 08:32:45.113347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.637 [2024-07-21 08:32:45.113368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.125200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.637 [2024-07-21 08:32:45.125231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.637 [2024-07-21 08:32:45.125248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.142262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.637 [2024-07-21 08:32:45.142299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.637 [2024-07-21 08:32:45.142318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.158393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.637 [2024-07-21 08:32:45.158443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.637 [2024-07-21 08:32:45.158502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.637 [2024-07-21 08:32:45.171195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.638 [2024-07-21 08:32:45.171231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.638 [2024-07-21 08:32:45.171251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.638 [2024-07-21 08:32:45.184633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.638 [2024-07-21 08:32:45.184695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.638 [2024-07-21 08:32:45.184712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.638 [2024-07-21 08:32:45.198003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.638 [2024-07-21 08:32:45.198039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.638 [2024-07-21 08:32:45.198059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.638 [2024-07-21 08:32:45.211261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.638 [2024-07-21 08:32:45.211296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.638 [2024-07-21 08:32:45.211316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.638 [2024-07-21 08:32:45.226707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.638 [2024-07-21 08:32:45.226737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.638 [2024-07-21 08:32:45.226753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.638 [2024-07-21 08:32:45.244068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.638 [2024-07-21 08:32:45.244121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.638 [2024-07-21 08:32:45.244153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.638 [2024-07-21 08:32:45.257306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.638 [2024-07-21 08:32:45.257351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.638 [2024-07-21 08:32:45.257369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.897 [2024-07-21 08:32:45.273944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.897 [2024-07-21 08:32:45.273974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.897 [2024-07-21 08:32:45.274009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.897 [2024-07-21 08:32:45.285667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.897 [2024-07-21 08:32:45.285711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.897 [2024-07-21 08:32:45.285727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.897 [2024-07-21 08:32:45.302549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.897 [2024-07-21 08:32:45.302585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.897 [2024-07-21 08:32:45.302605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.897 [2024-07-21 08:32:45.315877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.897 [2024-07-21 08:32:45.315908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.897 [2024-07-21 08:32:45.315944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.897 [2024-07-21 08:32:45.329049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.897 [2024-07-21 08:32:45.329084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.897 [2024-07-21 08:32:45.329122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.897 [2024-07-21 08:32:45.340944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.897 [2024-07-21 08:32:45.340980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.897 [2024-07-21 08:32:45.341006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.897 [2024-07-21 08:32:45.356963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.897 [2024-07-21 08:32:45.357003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.357024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.898 [2024-07-21 08:32:45.371425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.898 [2024-07-21 08:32:45.371462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.371482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.898 [2024-07-21 08:32:45.385832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.898 [2024-07-21 08:32:45.385862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.385878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.898 [2024-07-21 08:32:45.399861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.898 [2024-07-21 08:32:45.399896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.399928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.898 [2024-07-21 08:32:45.412816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.898 [2024-07-21 08:32:45.412851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.412871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.898 [2024-07-21 08:32:45.424832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.898 [2024-07-21 08:32:45.424871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.424889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.898 [2024-07-21 08:32:45.439447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.898 [2024-07-21 08:32:45.439483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.439502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.898 [2024-07-21 08:32:45.454669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.898 [2024-07-21 08:32:45.454704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.454724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.898 [2024-07-21 08:32:45.472281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.898 [2024-07-21 08:32:45.472330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.472352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.898 [2024-07-21 08:32:45.485768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.898 [2024-07-21 08:32:45.485799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.485816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.898 [2024-07-21 08:32:45.502274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.898 [2024-07-21 08:32:45.502311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.502330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:35.898 [2024-07-21 08:32:45.520311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:35.898 [2024-07-21 08:32:45.520347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.898 [2024-07-21 08:32:45.520367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.157 [2024-07-21 08:32:45.531491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.157 [2024-07-21 08:32:45.531528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.157 [2024-07-21 08:32:45.531547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.157 [2024-07-21 08:32:45.548501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.157 [2024-07-21 08:32:45.548543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.157 [2024-07-21 08:32:45.548563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.157 [2024-07-21 08:32:45.564169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.157 [2024-07-21 08:32:45.564228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.157 [2024-07-21 08:32:45.564276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.157 [2024-07-21 08:32:45.576210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.157 [2024-07-21 08:32:45.576246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.157 [2024-07-21 08:32:45.576266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.157 [2024-07-21 08:32:45.590932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.157 [2024-07-21 08:32:45.590967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.157 [2024-07-21 08:32:45.590987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.157 [2024-07-21 08:32:45.609208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.157 [2024-07-21 08:32:45.609248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.157 [2024-07-21 08:32:45.609269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.157 [2024-07-21 08:32:45.624956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.157 [2024-07-21 08:32:45.625014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.157 [2024-07-21 08:32:45.625049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.157 [2024-07-21 08:32:45.638061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.157 [2024-07-21 08:32:45.638098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.157 [2024-07-21 08:32:45.638118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.157 [2024-07-21 08:32:45.655903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.157 [2024-07-21 08:32:45.655933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.157 [2024-07-21 08:32:45.655949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.157 [2024-07-21 08:32:45.672060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.157 [2024-07-21 08:32:45.672097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.157 [2024-07-21 08:32:45.672117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.157 [2024-07-21 08:32:45.689373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.158 [2024-07-21 08:32:45.689431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.158 [2024-07-21 08:32:45.689470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.158 [2024-07-21 08:32:45.701478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.158 [2024-07-21 08:32:45.701514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.158 [2024-07-21 08:32:45.701533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.158 [2024-07-21 08:32:45.720648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.158 [2024-07-21 08:32:45.720678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.158 [2024-07-21 08:32:45.720695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.158 [2024-07-21 08:32:45.737210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.158 [2024-07-21 08:32:45.737246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.158 [2024-07-21 08:32:45.737271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.158 [2024-07-21 08:32:45.748213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.158 [2024-07-21 08:32:45.748249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.158 [2024-07-21 08:32:45.748269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.158 [2024-07-21 08:32:45.764043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.158 [2024-07-21 08:32:45.764080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.158 [2024-07-21 08:32:45.764099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.158 [2024-07-21 08:32:45.780020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.158 [2024-07-21 08:32:45.780056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.158 [2024-07-21 08:32:45.780076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.416 [2024-07-21 08:32:45.793425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.416 [2024-07-21 08:32:45.793462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.416 [2024-07-21 08:32:45.793483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.416 [2024-07-21 08:32:45.809173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.416 [2024-07-21 08:32:45.809212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.416 [2024-07-21 08:32:45.809233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.416 [2024-07-21 08:32:45.823657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.416 [2024-07-21 08:32:45.823706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.417 [2024-07-21 08:32:45.823725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.417 [2024-07-21 08:32:45.836253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.417 [2024-07-21 08:32:45.836289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.417 [2024-07-21 08:32:45.836310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.417 [2024-07-21 08:32:45.849767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.417 [2024-07-21 08:32:45.849799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.417 [2024-07-21 08:32:45.849816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.417 [2024-07-21 08:32:45.861902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.417 [2024-07-21 08:32:45.861957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.417 [2024-07-21 08:32:45.861977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.417 [2024-07-21 08:32:45.877318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20)
00:36:36.417 [2024-07-21 08:32:45.877355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.417 [2024-07-21 08:32:45.877374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:36.417 [2024-07-21 08:32:45.891292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:45.891340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:45.891361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.417 [2024-07-21 08:32:45.903393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:45.903429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:45.903450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.417 [2024-07-21 08:32:45.916856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:45.916887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:45.916919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.417 [2024-07-21 08:32:45.930179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:45.930215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:45.930234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.417 [2024-07-21 08:32:45.946434] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:45.946471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:45.946491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.417 [2024-07-21 08:32:45.958721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:45.958760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:45.958777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.417 [2024-07-21 08:32:45.972949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:45.972985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:45.973013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.417 [2024-07-21 08:32:45.987108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:45.987150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:45.987171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:36:36.417 [2024-07-21 08:32:45.999556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:45.999587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:45.999604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.417 [2024-07-21 08:32:46.015095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:46.015162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:46.015210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.417 [2024-07-21 08:32:46.028065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:46.028109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:46.028130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.417 [2024-07-21 08:32:46.040301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.417 [2024-07-21 08:32:46.040337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.417 [2024-07-21 08:32:46.040357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.054528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.054564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.054584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.068060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.068096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.068116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.082370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.082400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.082418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.095414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.095471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 
08:32:46.095513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.110297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.110328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.110349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.127326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.127386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.127417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.138663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.138694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.138712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.153873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.153903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21463 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.153936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.166932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.166961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.167004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.179256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.179292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.179314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.193760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.193791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.193809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.206946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.206981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.207001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.221907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.221996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.222048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.234623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.234658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.234692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.250958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.250993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.251013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.264556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.264592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.264611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.281251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.281287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.281307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.674 [2024-07-21 08:32:46.297344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.674 [2024-07-21 08:32:46.297379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.674 [2024-07-21 08:32:46.297410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.309686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.309736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.309754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.322957] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.323005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.323036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.337422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.337452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.337481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.351734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.351765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.351782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.363459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.363495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.363514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.377936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.377966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.378001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.390804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.390833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.390854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.407232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.407267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.407299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.420342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.420378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.420397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.435248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.435296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.435315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.450450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.450486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.450506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.463415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.463464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.463492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.476249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.476285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 
08:32:46.476305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.488687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.488716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.488734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.502608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.502651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.502686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.517467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.517517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.517539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.530861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.530892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:743 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.530926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.544696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.544728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.544759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.931 [2024-07-21 08:32:46.557287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:36.931 [2024-07-21 08:32:46.557322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.931 [2024-07-21 08:32:46.557342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.571772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.571802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.571822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.586780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.586811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.586832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.602185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.602221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.602258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.614826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.614857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.614875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.631824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.631857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.631894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.647337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.647386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.647426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.660419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.660454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.660477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.675322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.675358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.675383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.688865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.688897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.688915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.705458] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.705499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.705520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.718319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.718372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.718403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.732292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.732327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.732347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.189 [2024-07-21 08:32:46.744561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19eba20) 00:36:37.189 [2024-07-21 08:32:46.744595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.189 [2024-07-21 08:32:46.744624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:36:37.189
00:36:37.189 Latency(us)
00:36:37.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:37.189 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:37.189 nvme0n1 : 2.01 17726.94 69.25 0.00 0.00 7210.62 3883.61 23981.32
00:36:37.189 ===================================================================================================================
00:36:37.189 Total : 17726.94 69.25 0.00 0.00 7210.62 3883.61 23981.32
00:36:37.189 0
00:36:37.189 08:32:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:37.189 08:32:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:37.189 08:32:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:37.189 08:32:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:37.189 | .driver_specific
00:36:37.189 | .nvme_error
00:36:37.189 | .status_code
00:36:37.189 | .command_transient_transport_error'
00:36:37.446 08:32:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 ))
00:36:37.446 08:32:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 76150
00:36:37.446 08:32:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 76150 ']'
00:36:37.446 08:32:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 76150
00:36:37.446 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:37.446 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:37.446 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm=
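[editor's note] The harness above reads `bdev_get_iostat` JSON over the bperf RPC socket and pulls the transient-transport-error counter out with a jq filter. A minimal Python sketch of the same extraction, applied to a hypothetical payload (the shape mirrors the jq path in the log; only the counter value 139 comes from the log, the surrounding fields are assumptions):

```python
import json

# Hypothetical payload shaped like `rpc.py bdev_get_iostat -b nvme0n1` output;
# only the 139 counter is taken from the log above, the rest is assumed.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 139
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(stats: dict) -> int:
    # Mirrors the jq filter:
    # .bdevs[0] | .driver_specific | .nvme_error
    #           | .status_code | .command_transient_transport_error
    return (stats["bdevs"][0]["driver_specific"]["nvme_error"]
                 ["status_code"]["command_transient_transport_error"])

count = get_transient_errcount(iostat)
assert count > 0  # the digest-error test passes when at least one error was counted
print(count)
```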
76150
00:36:37.446 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:37.446 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:37.446 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76150'
killing process with pid 76150
00:36:37.446 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 76150
Received shutdown signal, test time was about 2.000000 seconds
00:36:37.446
00:36:37.446 Latency(us)
00:36:37.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:37.446 ===================================================================================================================
00:36:37.446 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:37.446 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 76150
00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=76566
00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 76566
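[editor's note] The harness launches bdevperf with `-r /var/tmp/bperf.sock` and then blocks in `waitforlisten` until the RPC socket is up. A rough, self-contained sketch of that wait loop under the assumption that "ready" simply means a UNIX-domain socket accepts connections (the helper name and polling interval here are illustrative, not SPDK's actual implementation):

```python
import os
import socket
import tempfile
import threading
import time

def waitforlisten(sock_path: str, timeout: float = 5.0) -> bool:
    """Poll until a UNIX-domain socket (e.g. /var/tmp/bperf.sock) accepts
    connections. Illustrative sketch, not the harness's real helper."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True
        except OSError:
            time.sleep(0.05)  # socket not bound/listening yet; retry
        finally:
            s.close()
    return False

# Demo: bring a listener up shortly after we start waiting, as bdevperf would.
path = os.path.join(tempfile.mkdtemp(), "bperf.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)

def start_later():
    time.sleep(0.2)
    srv.bind(path)
    srv.listen(1)

threading.Thread(target=start_later).start()
ready = waitforlisten(path)
print(ready)
```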
/var/tmp/bperf.sock 00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 76566 ']' 00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:37.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:37.704 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:37.704 [2024-07-21 08:32:47.286425] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:36:37.704 [2024-07-21 08:32:47.286510] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76566 ] 00:36:37.704 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:37.704 Zero copy mechanism will not be used. 
00:36:37.704 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.961 [2024-07-21 08:32:47.349695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.961 [2024-07-21 08:32:47.439283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.961 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:37.961 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:37.961 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:37.961 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:38.218 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:38.218 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.218 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.218 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.218 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:38.218 08:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:38.785 nvme0n1 00:36:38.785 08:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
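[editor's note] The controller is attached with `--ddgst`, so every data PDU carries a CRC32C data digest, and `accel_error_inject_error -o crc32c -t corrupt` forces that checksum to come out wrong, producing the "data digest error" lines above. A minimal bitwise CRC32C (Castagnoli) sketch showing why any corruption of the computed digest or payload is detected (this is an illustration, not SPDK's accel code):

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC32C (Castagnoli; reflected polynomial 0x82F63B78),
    the checksum NVMe/TCP uses for its header and data digests."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; conditionally fold in the reflected polynomial.
            crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
    return crc ^ 0xFFFFFFFF

# Standard CRC32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283

# Flipping a single payload bit changes the digest, which is how a corrupted
# CRC (as injected by the accel error above) is caught on the receive path.
payload = b"\x00" * 4096
assert crc32c(payload) != crc32c(b"\x01" + payload[1:])
print(hex(crc32c(b"123456789")))  # → 0xe3069283
```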
crc32c -t corrupt -i 32 00:36:38.785 08:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.785 08:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.785 08:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.785 08:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:38.785 08:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:38.785 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:38.785 Zero copy mechanism will not be used. 00:36:38.785 Running I/O for 2 seconds... 00:36:38.785 [2024-07-21 08:32:48.349349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:38.785 [2024-07-21 08:32:48.349414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.785 [2024-07-21 08:32:48.349437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.785 [2024-07-21 08:32:48.355890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:38.785 [2024-07-21 08:32:48.355951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.785 [2024-07-21 08:32:48.355981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.785 [2024-07-21 08:32:48.362247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x75d3d0) 00:36:38.785 [2024-07-21 08:32:48.362282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.785 [2024-07-21 08:32:48.362312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.785 [2024-07-21 08:32:48.368562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:38.785 [2024-07-21 08:32:48.368597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.785 [2024-07-21 08:32:48.368639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.785 [2024-07-21 08:32:48.374950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:38.785 [2024-07-21 08:32:48.374986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.785 [2024-07-21 08:32:48.375010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.785 [2024-07-21 08:32:48.381221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:38.785 [2024-07-21 08:32:48.381255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.785 [2024-07-21 08:32:48.381278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.785 [2024-07-21 08:32:48.387710] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:38.785 [2024-07-21 08:32:48.387742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.785 [2024-07-21 08:32:48.387765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.785 [2024-07-21 08:32:48.394090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:38.785 [2024-07-21 08:32:48.394124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.785 [2024-07-21 08:32:48.394146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.785 [2024-07-21 08:32:48.399856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:38.785 [2024-07-21 08:32:48.399901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.785 [2024-07-21 08:32:48.399918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.785 [2024-07-21 08:32:48.406337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:38.785 [2024-07-21 08:32:48.406370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.785 [2024-07-21 08:32:48.406390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:36:38.785 [2024-07-21 08:32:48.412680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:38.785 [2024-07-21 08:32:48.412710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.785 [2024-07-21 08:32:48.412731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.419055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.419089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.419119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.425359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.425393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.425416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.431726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.431771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.431795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.438100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.438134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.438170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.444445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.444481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.444512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.450705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.450736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.450753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.457103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.457137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 
08:32:48.457155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.463542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.463575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.463595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.470055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.470088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.470109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.476491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.476524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.476545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.482815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.482859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.482877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.489222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.489254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.489273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.495512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.495546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.495564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.501959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.501993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.502020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.508320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.508352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.508373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.514599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.514642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.514685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.520986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.521035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.521055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.527529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.527564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.527583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.533735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.533764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.533787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.540088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.540123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.540146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.546439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.546473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.546498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.552891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.552935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.552952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.559267] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.559301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.559320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.565551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.044 [2024-07-21 08:32:48.565584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.044 [2024-07-21 08:32:48.565604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.044 [2024-07-21 08:32:48.571878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.045 [2024-07-21 08:32:48.571924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.045 [2024-07-21 08:32:48.571943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.045 [2024-07-21 08:32:48.578192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.045 [2024-07-21 08:32:48.578226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.045 [2024-07-21 08:32:48.578245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0
00:36:39.045 [2024-07-21 08:32:48.584457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0)
00:36:39.045 [2024-07-21 08:32:48.584490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:39.045 [2024-07-21 08:32:48.584509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line record repeats roughly every 6-8 ms from 08:32:48.590 through 08:32:49.115: a data digest error on tqpair=(0x75d3d0), the affected READ (sqid:1, len:32, varying lba and cid), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:36:39.562 [2024-07-21 08:32:49.121994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0)
00:36:39.562 [2024-07-21 08:32:49.122027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.562 [2024-07-21 08:32:49.122047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.562 [2024-07-21 08:32:49.128368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.562 [2024-07-21 08:32:49.128401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.562 [2024-07-21 08:32:49.128419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.562 [2024-07-21 08:32:49.134644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.562 [2024-07-21 08:32:49.134690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.562 [2024-07-21 08:32:49.134707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.562 [2024-07-21 08:32:49.141049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.562 [2024-07-21 08:32:49.141082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.562 [2024-07-21 08:32:49.141101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.562 [2024-07-21 08:32:49.147265] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.562 [2024-07-21 08:32:49.147298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.562 [2024-07-21 08:32:49.147317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.562 [2024-07-21 08:32:49.153417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.562 [2024-07-21 08:32:49.153451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.562 [2024-07-21 08:32:49.153470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.562 [2024-07-21 08:32:49.159802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.562 [2024-07-21 08:32:49.159833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.562 [2024-07-21 08:32:49.159851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.562 [2024-07-21 08:32:49.166333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.562 [2024-07-21 08:32:49.166368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.562 [2024-07-21 08:32:49.166387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:36:39.562 [2024-07-21 08:32:49.172789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.562 [2024-07-21 08:32:49.172834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.562 [2024-07-21 08:32:49.172864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.562 [2024-07-21 08:32:49.179209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.562 [2024-07-21 08:32:49.179242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.562 [2024-07-21 08:32:49.179260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.562 [2024-07-21 08:32:49.185493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.562 [2024-07-21 08:32:49.185526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.562 [2024-07-21 08:32:49.185546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.192307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.192343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.192366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.198530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.198564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.198584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.204887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.204923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.204958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.211151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.211185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.211204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.217365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.217399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.217417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.223981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.224016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.224035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.230257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.230298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.230318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.236492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.236525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.236545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.242978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.243012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:39.821 [2024-07-21 08:32:49.243031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.249283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.249316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.249334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.255474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.255507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.255526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.261722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.261753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.261770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.268161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.268196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.268216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.274473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.274507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.274526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.821 [2024-07-21 08:32:49.280916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.821 [2024-07-21 08:32:49.280945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.821 [2024-07-21 08:32:49.280979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.287322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.287356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.287375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.293762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.293792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.293823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.300063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.300096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.300115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.306473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.306507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.306527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.312814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.312844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.312861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.318983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 
00:36:39.822 [2024-07-21 08:32:49.319016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.319035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.325160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.325194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.325213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.331326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.331359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.331377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.337475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.337509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.337535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.343697] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.343727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.343744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.349868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.349899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.349932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.356062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.356096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.356115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.362327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.362360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.362379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.368633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.368682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.368699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.375118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.375153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.375172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.381447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.381481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.381500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.387868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.387899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.387932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.394164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.394203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.394223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.400402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.400435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.400454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.406723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.406753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.406770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.413078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.413112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.413131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.419191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.419225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.419243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.425509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.425542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.425561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.431986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.432020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.432038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.438196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.438229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:39.822 [2024-07-21 08:32:49.438248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.822 [2024-07-21 08:32:49.444501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:39.822 [2024-07-21 08:32:49.444535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.822 [2024-07-21 08:32:49.444554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.450905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.450937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.450968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.457205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.457240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.457258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.463335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.463368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.463387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.469656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.469712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.469730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.476083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.476117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.476137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.482393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.482426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.482445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.488565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.488598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.488625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.495048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.495082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.495101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.501291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.501324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.501349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.507689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.507733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.507751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.514001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 
00:36:40.084 [2024-07-21 08:32:49.514034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.514053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.520332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.520365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.520384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.526539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.526573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.526592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.532861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.532891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.532908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.539140] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.539174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.539193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.084 [2024-07-21 08:32:49.545377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.084 [2024-07-21 08:32:49.545411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.084 [2024-07-21 08:32:49.545430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.551493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.551527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.551546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.557653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.557705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.557724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.564090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.564124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.564143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.570449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.570482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.570502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.576958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.576991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.577010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.583320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.583353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.583372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.589496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.589530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.589548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.595777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.595807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.595824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.601881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.601911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.601947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.608119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.608153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.608172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.614345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.614378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.614397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.620593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.620636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.620671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.626852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.626884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.626920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.633234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.633267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:40.085 [2024-07-21 08:32:49.633286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.639684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.639728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.639745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.646077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.646110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.646129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.652324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.652358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.652376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.658547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.658580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.658598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.664957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.665007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.665039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.671210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.671245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.671264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.677529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.677562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.677582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.683897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.683945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.683965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.690222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.690257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.690276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.696673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.696703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.696719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.702987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.085 [2024-07-21 08:32:49.703020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.703039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.085 [2024-07-21 08:32:49.709306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 
00:36:40.085 [2024-07-21 08:32:49.709339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.085 [2024-07-21 08:32:49.709358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.346 [2024-07-21 08:32:49.715844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.346 [2024-07-21 08:32:49.715892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.346 [2024-07-21 08:32:49.715909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.346 [2024-07-21 08:32:49.722124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.346 [2024-07-21 08:32:49.722159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.346 [2024-07-21 08:32:49.722178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.346 [2024-07-21 08:32:49.728251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.346 [2024-07-21 08:32:49.728284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.346 [2024-07-21 08:32:49.728303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.346 [2024-07-21 08:32:49.734443] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.346 [2024-07-21 08:32:49.734476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.734495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.740759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.740789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.740806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.747125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.747159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.747179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.753212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.753246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.753265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.759702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.759749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.759766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.766310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.766344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.766364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.772577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.772611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.772646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.778959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.778993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.779018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.785192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.785229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.785248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.791550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.791583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.791602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.798260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.798295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.798314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.804501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.804534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.804553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.811031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.811065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.811084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.817270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.817302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.817321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.823403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.823436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.823455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.829870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.829921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:40.347 [2024-07-21 08:32:49.829939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.836088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.836120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.836140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.842370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.842404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.842423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.848576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.848610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.848638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.854776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.854806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.854823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.861086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.861121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.861140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.867401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.867434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.867452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.873806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.873838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.873873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.880102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.880136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.880155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.886433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.886467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.347 [2024-07-21 08:32:49.886486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.347 [2024-07-21 08:32:49.892945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.347 [2024-07-21 08:32:49.892978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.892997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.899239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.899273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.899292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.905388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 
00:36:40.348 [2024-07-21 08:32:49.905422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.905441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.911575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.911609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.911636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.917991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.918025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.918045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.924227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.924260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.924279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.930294] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.930329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.930347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.936459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.936493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.936518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.942839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.942870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.942905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.949147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.949180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.949199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.955399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.955437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.955457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.961783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.961814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.961831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.967979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.968013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.968031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.348 [2024-07-21 08:32:49.974155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.348 [2024-07-21 08:32:49.974189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.348 [2024-07-21 08:32:49.974207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:49.980409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:49.980443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:49.980462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:49.987513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:49.987547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:49.987567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:49.995585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:49.995638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:49.995659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.003583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.003631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.003656] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.007935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.007984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.008004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.015439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.015490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.015511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.023189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.023227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.023248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.031831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.031865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.031898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.038674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.038713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.038736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.044767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.044801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.044818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.051753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.051786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.051804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.059402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.059439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.059460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.067542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.067577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.067597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.076645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.076693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.076712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.084683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.084719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.084739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.092368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.092403] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.092424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.100266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.100302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.100321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.108999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.109036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.109055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.117267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.117303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.117322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.124873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.124906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.124943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.131759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.131792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.131809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.139360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.139395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.610 [2024-07-21 08:32:50.139414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.610 [2024-07-21 08:32:50.145884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.610 [2024-07-21 08:32:50.145916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.145933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.152082] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.152117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.152150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.158260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.158294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.158314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.164387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.164420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.164440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.170631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.170678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.170700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.176878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.176930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.176950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.183144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.183183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.183202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.189555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.189589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.189608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.196106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.196141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.196165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.202445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.202483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.202502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.208914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.208961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.208979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.215161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.215191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.215213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.220983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.221022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.221041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.227273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.227306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.227325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.611 [2024-07-21 08:32:50.233715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.611 [2024-07-21 08:32:50.233746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.611 [2024-07-21 08:32:50.233763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.240007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.240038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.240055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.246373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.246411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:40.869 [2024-07-21 08:32:50.246431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.252787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.252817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.252835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.259019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.259056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.259075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.265295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.265329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.265348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.271484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.271518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.271537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.277730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.277761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.277778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.283458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.283488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.283505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.289186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.289216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.289239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.294877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.294908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.294925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.300728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.300758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.300775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.306639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.306670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.306687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.312458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.869 [2024-07-21 08:32:50.312487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.312504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.869 [2024-07-21 08:32:50.318286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 
00:36:40.869 [2024-07-21 08:32:50.318317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.869 [2024-07-21 08:32:50.318334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.870 [2024-07-21 08:32:50.324087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.870 [2024-07-21 08:32:50.324117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.870 [2024-07-21 08:32:50.324134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.870 [2024-07-21 08:32:50.330162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.870 [2024-07-21 08:32:50.330194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.870 [2024-07-21 08:32:50.330211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.870 [2024-07-21 08:32:50.336063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.870 [2024-07-21 08:32:50.336109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.870 [2024-07-21 08:32:50.336125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.870 [2024-07-21 08:32:50.341924] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x75d3d0) 00:36:40.870 [2024-07-21 08:32:50.341961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.870 [2024-07-21 08:32:50.341979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.870 00:36:40.870 Latency(us) 00:36:40.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:40.870 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:40.870 nvme0n1 : 2.00 4798.27 599.78 0.00 0.00 3330.14 837.40 9126.49 00:36:40.870 =================================================================================================================== 00:36:40.870 Total : 4798.27 599.78 0.00 0.00 3330.14 837.40 9126.49 00:36:40.870 0 00:36:40.870 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:40.870 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:40.870 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:40.870 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:40.870 | .driver_specific 00:36:40.870 | .nvme_error 00:36:40.870 | .status_code 00:36:40.870 | .command_transient_transport_error' 00:36:41.126 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 309 > 0 )) 00:36:41.126 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 76566 00:36:41.126 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 76566 ']' 00:36:41.126 
08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 76566 00:36:41.126 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:41.126 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:41.126 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76566 00:36:41.126 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:41.126 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:41.126 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76566' 00:36:41.126 killing process with pid 76566 00:36:41.126 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 76566 00:36:41.126 Received shutdown signal, test time was about 2.000000 seconds 00:36:41.126 00:36:41.126 Latency(us) 00:36:41.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:41.126 =================================================================================================================== 00:36:41.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:41.126 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 76566 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 
00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=76980 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 76980 /var/tmp/bperf.sock 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 76980 ']' 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:41.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:41.383 08:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:41.383 [2024-07-21 08:32:50.921097] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:36:41.383 [2024-07-21 08:32:50.921194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76980 ] 00:36:41.383 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.383 [2024-07-21 08:32:50.978883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.640 [2024-07-21 08:32:51.063948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.640 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:41.640 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:41.640 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:41.640 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:41.897 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:41.897 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.897 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:41.897 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.897 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:41.897 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:42.465 nvme0n1 00:36:42.465 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:42.465 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.465 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:42.465 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.465 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:42.465 08:32:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:42.465 Running I/O for 2 seconds... 
00:36:42.465 [2024-07-21 08:32:52.082989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ee5c8 00:36:42.465 [2024-07-21 08:32:52.084020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.465 [2024-07-21 08:32:52.084081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:42.725 [2024-07-21 08:32:52.096263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f7970 00:36:42.725 [2024-07-21 08:32:52.097106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.725 [2024-07-21 08:32:52.097139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:42.725 [2024-07-21 08:32:52.110969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e88f8 00:36:42.725 [2024-07-21 08:32:52.112825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.725 [2024-07-21 08:32:52.112870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:42.725 [2024-07-21 08:32:52.124506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f92c0 00:36:42.725 [2024-07-21 08:32:52.126496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.725 [2024-07-21 08:32:52.126529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:42.725 [2024-07-21 08:32:52.133545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e4de8 00:36:42.725 [2024-07-21 08:32:52.134370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.725 [2024-07-21 08:32:52.134398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:42.725 [2024-07-21 08:32:52.146903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e88f8 00:36:42.725 [2024-07-21 08:32:52.147911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.725 [2024-07-21 08:32:52.147955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.725 [2024-07-21 08:32:52.160211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e2c28 00:36:42.725 [2024-07-21 08:32:52.161378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.725 [2024-07-21 08:32:52.161406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:42.725 [2024-07-21 08:32:52.172205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fd640 00:36:42.725 [2024-07-21 08:32:52.173345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.173378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.186400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190eee38 00:36:42.726 [2024-07-21 08:32:52.187800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.187844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.199568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e3498 00:36:42.726 [2024-07-21 08:32:52.201061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.201095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.211575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190df118 00:36:42.726 [2024-07-21 08:32:52.213054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.213087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.224846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e5658 00:36:42.726 [2024-07-21 08:32:52.226525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.226555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.234988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190df118 00:36:42.726 [2024-07-21 08:32:52.235941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.235985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.248304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f35f0 00:36:42.726 [2024-07-21 08:32:52.249443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.249471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.261585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e9e10 00:36:42.726 [2024-07-21 08:32:52.262890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.262933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.274843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ecc78 00:36:42.726 [2024-07-21 08:32:52.276332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 
[2024-07-21 08:32:52.276361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.288127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fdeb0 00:36:42.726 [2024-07-21 08:32:52.289806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.289850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.301442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f6020 00:36:42.726 [2024-07-21 08:32:52.303268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.303300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.313286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190efae0 00:36:42.726 [2024-07-21 08:32:52.314618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.314649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.324836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190eaef0 00:36:42.726 [2024-07-21 08:32:52.326710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20423 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.326740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.338108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f4f40 00:36:42.726 [2024-07-21 08:32:52.340198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.340234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.726 [2024-07-21 08:32:52.349961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190feb58 00:36:42.726 [2024-07-21 08:32:52.351026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.726 [2024-07-21 08:32:52.351072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:42.984 [2024-07-21 08:32:52.363113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f20d8 00:36:42.984 [2024-07-21 08:32:52.364266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.984 [2024-07-21 08:32:52.364295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:42.984 [2024-07-21 08:32:52.375204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f8e88 00:36:42.984 [2024-07-21 08:32:52.376341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:4565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.984 [2024-07-21 08:32:52.376374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.984 [2024-07-21 08:32:52.388459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e3060 00:36:42.984 [2024-07-21 08:32:52.389789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.984 [2024-07-21 08:32:52.389833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:42.984 [2024-07-21 08:32:52.401665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ed920 00:36:42.984 [2024-07-21 08:32:52.402951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.984 [2024-07-21 08:32:52.402985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.984 [2024-07-21 08:32:52.413933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fc560 00:36:42.984 [2024-07-21 08:32:52.414761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.984 [2024-07-21 08:32:52.414797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.984 [2024-07-21 08:32:52.427152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e5658 00:36:42.984 [2024-07-21 08:32:52.428171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.984 [2024-07-21 08:32:52.428201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.439190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f6cc8 00:36:42.985 [2024-07-21 08:32:52.441029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.985 [2024-07-21 08:32:52.441062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.452402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fb048 00:36:42.985 [2024-07-21 08:32:52.454415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.985 [2024-07-21 08:32:52.454448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.463251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190de8a8 00:36:42.985 [2024-07-21 08:32:52.464204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.985 [2024-07-21 08:32:52.464232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.477292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e4de8 
00:36:42.985 [2024-07-21 08:32:52.478455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.985 [2024-07-21 08:32:52.478483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.490433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ed4e8 00:36:42.985 [2024-07-21 08:32:52.491804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.985 [2024-07-21 08:32:52.491847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.504949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f7da8 00:36:42.985 [2024-07-21 08:32:52.506914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.985 [2024-07-21 08:32:52.506958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.516462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190df118 00:36:42.985 [2024-07-21 08:32:52.518014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.985 [2024-07-21 08:32:52.518058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.528006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x11ef300) with pdu=0x2000190e1b48 00:36:42.985 [2024-07-21 08:32:52.529891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.985 [2024-07-21 08:32:52.529922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.538756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fc998 00:36:42.985 [2024-07-21 08:32:52.539787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.985 [2024-07-21 08:32:52.539831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.552352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ebfd0 00:36:42.985 [2024-07-21 08:32:52.553487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.985 [2024-07-21 08:32:52.553521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.565764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f35f0 00:36:42.985 [2024-07-21 08:32:52.567089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.985 [2024-07-21 08:32:52.567122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:42.985 [2024-07-21 08:32:52.579891] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f2d80
00:36:42.985 [2024-07-21 08:32:52.581400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:42.985 [2024-07-21 08:32:52.581444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:42.985 [2024-07-21 08:32:52.591546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190dece0
00:36:42.985 [2024-07-21 08:32:52.593602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:42.985 [2024-07-21 08:32:52.593646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:42.985 [2024-07-21 08:32:52.603355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e6300
00:36:42.985 [2024-07-21 08:32:52.604326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:42.985 [2024-07-21 08:32:52.604354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.616542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190efae0
00:36:43.243 [2024-07-21 08:32:52.617700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.617729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.628632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fdeb0
00:36:43.243 [2024-07-21 08:32:52.629799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.629844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.642010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f31b8
00:36:43.243 [2024-07-21 08:32:52.643309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.643341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.655343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ef270
00:36:43.243 [2024-07-21 08:32:52.656805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.656832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.668679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f0788
00:36:43.243 [2024-07-21 08:32:52.670334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.670362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.680568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e4578
00:36:43.243 [2024-07-21 08:32:52.681741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.681769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.693600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fa3a0
00:36:43.243 [2024-07-21 08:32:52.694589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.694628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.706710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f7da8
00:36:43.243 [2024-07-21 08:32:52.707823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.707852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.718227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fd208
00:36:43.243 [2024-07-21 08:32:52.719997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.720026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.728684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e1710
00:36:43.243 [2024-07-21 08:32:52.729657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.729685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.741973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190df550
00:36:43.243 [2024-07-21 08:32:52.743107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.743155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.755335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f31b8
00:36:43.243 [2024-07-21 08:32:52.756675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.756705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.769631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ddc00
00:36:43.243 [2024-07-21 08:32:52.771171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.771202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.782891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ec840
00:36:43.243 [2024-07-21 08:32:52.784562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.784589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.795036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f0ff8
00:36:43.243 [2024-07-21 08:32:52.796700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.796728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.806911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ec840
00:36:43.243 [2024-07-21 08:32:52.808078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.808108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:36:43.243 [2024-07-21 08:32:52.819864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190de038
00:36:43.243 [2024-07-21 08:32:52.820866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.243 [2024-07-21 08:32:52.820894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:43.244 [2024-07-21 08:32:52.834388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f6458
00:36:43.244 [2024-07-21 08:32:52.836366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.244 [2024-07-21 08:32:52.836393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:43.244 [2024-07-21 08:32:52.846256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fd208
00:36:43.244 [2024-07-21 08:32:52.847748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.244 [2024-07-21 08:32:52.847776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:43.244 [2024-07-21 08:32:52.857854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f0788
00:36:43.244 [2024-07-21 08:32:52.859847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.244 [2024-07-21 08:32:52.859880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:43.244 [2024-07-21 08:32:52.868811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f8e88
00:36:43.244 [2024-07-21 08:32:52.869806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.244 [2024-07-21 08:32:52.869833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:52.882094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e4de8
00:36:43.502 [2024-07-21 08:32:52.883222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:52.883249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:52.895445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f31b8
00:36:43.502 [2024-07-21 08:32:52.896777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:52.896822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:52.909611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f7970
00:36:43.502 [2024-07-21 08:32:52.911138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:52.911166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:52.922789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e4140
00:36:43.502 [2024-07-21 08:32:52.924416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:52.924449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:52.934758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190eb328
00:36:43.502 [2024-07-21 08:32:52.936396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:52.936441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:52.947976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e4140
00:36:43.502 [2024-07-21 08:32:52.949795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:52.949839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:52.961234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190df550
00:36:43.502 [2024-07-21 08:32:52.963207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:52.963234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:52.973072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f2510
00:36:43.502 [2024-07-21 08:32:52.974552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:52.974579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:52.984706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e3d08
00:36:43.502 [2024-07-21 08:32:52.986108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:52.986139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:52.997921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f1430
00:36:43.502 [2024-07-21 08:32:52.999542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:52.999588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:53.011217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f3a28
00:36:43.502 [2024-07-21 08:32:53.012996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:53.013029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:53.024544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190efae0
00:36:43.502 [2024-07-21 08:32:53.026558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:53.026623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:53.037890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e1f80
00:36:43.502 [2024-07-21 08:32:53.040038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:53.040066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:53.046915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f96f8
00:36:43.502 [2024-07-21 08:32:53.047867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:53.047918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:53.058960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fc998
00:36:43.502 [2024-07-21 08:32:53.059887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:53.059928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:53.073161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f4b08
00:36:43.502 [2024-07-21 08:32:53.074292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:53.074342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:53.086299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fe720
00:36:43.502 [2024-07-21 08:32:53.087585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:53.087624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:53.098375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e3d08
00:36:43.502 [2024-07-21 08:32:53.099651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:53.099696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:53.111659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f7538
00:36:43.502 [2024-07-21 08:32:53.113103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:53.113130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:36:43.502 [2024-07-21 08:32:53.123544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e1b48
00:36:43.502 [2024-07-21 08:32:53.124473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.502 [2024-07-21 08:32:53.124501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.136481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e95a0
00:36:43.760 [2024-07-21 08:32:53.137269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.137298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.149751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f6020
00:36:43.760 [2024-07-21 08:32:53.150717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.150747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.162744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190eea00
00:36:43.760 [2024-07-21 08:32:53.164008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.164036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.175737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ee190
00:36:43.760 [2024-07-21 08:32:53.177158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.177190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.187633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f7538
00:36:43.760 [2024-07-21 08:32:53.189562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.189594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.198539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e8088
00:36:43.760 [2024-07-21 08:32:53.199453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.199484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.211848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f0bc0
00:36:43.760 [2024-07-21 08:32:53.212942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.212986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.225201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f8a50
00:36:43.760 [2024-07-21 08:32:53.226475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.226522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.239008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e49b0
00:36:43.760 [2024-07-21 08:32:53.240133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.240165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.251063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e84c0
00:36:43.760 [2024-07-21 08:32:53.253018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.253049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.261985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e99d8
00:36:43.760 [2024-07-21 08:32:53.262888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.262932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.276185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fbcf0
00:36:43.760 [2024-07-21 08:32:53.277276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.277308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.289284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e6b70
00:36:43.760 [2024-07-21 08:32:53.290532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.290565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.301296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e9e10
00:36:43.760 [2024-07-21 08:32:53.302533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.302565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.314559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f2510
00:36:43.760 [2024-07-21 08:32:53.315947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.315990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.326407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fda78
00:36:43.760 [2024-07-21 08:32:53.327310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.327337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.340491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f1868
00:36:43.760 [2024-07-21 08:32:53.342064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.342096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.353779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ea680
00:36:43.760 [2024-07-21 08:32:53.355551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.355580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.365652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e84c0
00:36:43.760 [2024-07-21 08:32:53.366919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.366947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:36:43.760 [2024-07-21 08:32:53.378469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e2c28
00:36:43.760 [2024-07-21 08:32:53.379581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:43.760 [2024-07-21 08:32:53.379610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.390554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190eaef0
00:36:44.020 [2024-07-21 08:32:53.392534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.392566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.401531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e3060
00:36:44.020 [2024-07-21 08:32:53.402434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.402470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.415899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ebb98
00:36:44.020 [2024-07-21 08:32:53.417001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.417049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.429075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f6458
00:36:44.020 [2024-07-21 08:32:53.430336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.430363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.443622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e4140
00:36:44.020 [2024-07-21 08:32:53.445538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.445570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.455481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ff3c8
00:36:44.020 [2024-07-21 08:32:53.456913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.456941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.467071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e6738
00:36:44.020 [2024-07-21 08:32:53.469042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.469075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.477985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fef90
00:36:44.020 [2024-07-21 08:32:53.478874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.478916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.491250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f20d8
00:36:44.020 [2024-07-21 08:32:53.492308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.492334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.504496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f8618
00:36:44.020 [2024-07-21 08:32:53.505799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.505842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.517701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fac10
00:36:44.020 [2024-07-21 08:32:53.519102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.519146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.530949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f6cc8
00:36:44.020 [2024-07-21 08:32:53.532525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.532567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.544264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f20d8
00:36:44.020 [2024-07-21 08:32:53.546035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.546063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.557555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f1868
00:36:44.020 [2024-07-21 08:32:53.559480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.559507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.567703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e27f0
00:36:44.020 [2024-07-21 08:32:53.568891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.568933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.581648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f20d8
00:36:44.020 [2024-07-21 08:32:53.582747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.582777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.593721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f1430
00:36:44.020 [2024-07-21 08:32:53.595584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.595625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.607734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f0788
00:36:44.020 [2024-07-21 08:32:53.609259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.020 [2024-07-21 08:32:53.609287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:36:44.020 [2024-07-21 08:32:53.619737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e5220 00:36:44.020 [2024-07-21 08:32:53.621264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.020 [2024-07-21 08:32:53.621292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:44.020 [2024-07-21 08:32:53.631547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e3060 00:36:44.020 [2024-07-21 08:32:53.632619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.020 [2024-07-21 08:32:53.632647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:44.020 [2024-07-21 08:32:53.644409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ebfd0 00:36:44.020 [2024-07-21 08:32:53.645307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.020 [2024-07-21 08:32:53.645336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:44.279 [2024-07-21 08:32:53.657824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f6020 00:36:44.280 [2024-07-21 08:32:53.658899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.658929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.669831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e6738 00:36:44.280 [2024-07-21 08:32:53.671738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.671768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.681522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fa3a0 00:36:44.280 [2024-07-21 08:32:53.682408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.682438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.694627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190eff18 00:36:44.280 [2024-07-21 08:32:53.695646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.695691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.709162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e5a90 00:36:44.280 [2024-07-21 08:32:53.710930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.710957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.721162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fe720 00:36:44.280 [2024-07-21 08:32:53.722357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.722400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.734124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f1ca0 00:36:44.280 [2024-07-21 08:32:53.735209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.735244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.746171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e5220 00:36:44.280 [2024-07-21 08:32:53.748112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.748144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.758000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e6300 00:36:44.280 [2024-07-21 08:32:53.758904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:44.280 [2024-07-21 08:32:53.758934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.772342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e6b70 00:36:44.280 [2024-07-21 08:32:53.773863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.773906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.785723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f5378 00:36:44.280 [2024-07-21 08:32:53.787395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.787428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.797567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fb8b8 00:36:44.280 [2024-07-21 08:32:53.798809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.798850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.811801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f9b30 00:36:44.280 [2024-07-21 08:32:53.813690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11775 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.813717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.823721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e5220 00:36:44.280 [2024-07-21 08:32:53.825075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.825118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.835347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190eee38 00:36:44.280 [2024-07-21 08:32:53.837254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.837287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.847196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190feb58 00:36:44.280 [2024-07-21 08:32:53.848079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.848125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.859088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e49b0 00:36:44.280 [2024-07-21 08:32:53.859929] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.859971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.872919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f7100 00:36:44.280 [2024-07-21 08:32:53.873600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.873638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.886179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fd640 00:36:44.280 [2024-07-21 08:32:53.887045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.887074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:44.280 [2024-07-21 08:32:53.899461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190ea248 00:36:44.280 [2024-07-21 08:32:53.900488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.280 [2024-07-21 08:32:53.900517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:53.911424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e3d08 00:36:44.539 [2024-07-21 08:32:53.913393] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:53.913425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:53.922302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e5ec8 00:36:44.539 [2024-07-21 08:32:53.923129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:53.923156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:53.935592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f4f40 00:36:44.539 [2024-07-21 08:32:53.936628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:53.936655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:53.948879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190fd640 00:36:44.539 [2024-07-21 08:32:53.950050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:53.950082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:53.962173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with 
pdu=0x2000190f3e60 00:36:44.539 [2024-07-21 08:32:53.963508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:53.963541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:53.975475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190df988 00:36:44.539 [2024-07-21 08:32:53.976977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:53.977010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:53.988772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f4f40 00:36:44.539 [2024-07-21 08:32:53.990444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:53.990487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:54.002071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f5378 00:36:44.539 [2024-07-21 08:32:54.003930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:54.003962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:54.013953] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e88f8 00:36:44.539 [2024-07-21 08:32:54.015295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:54.015323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:54.025500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e01f8 00:36:44.539 [2024-07-21 08:32:54.027350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:54.027382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:54.036390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190de038 00:36:44.539 [2024-07-21 08:32:54.037211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:54.037243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:54.050474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190f6458 00:36:44.539 [2024-07-21 08:32:54.051486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.539 [2024-07-21 08:32:54.051534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:44.539 [2024-07-21 08:32:54.063573] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e5a90
00:36:44.539 [2024-07-21 08:32:54.064794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.539 [2024-07-21 08:32:54.064842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:36:44.539 [2024-07-21 08:32:54.076480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef300) with pdu=0x2000190e7818
00:36:44.539 [2024-07-21 08:32:54.077662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:44.539 [2024-07-21 08:32:54.077690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:44.539
00:36:44.539 Latency(us)
00:36:44.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:44.539 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:44.539 nvme0n1 : 2.01 20108.22 78.55 0.00 0.00 6354.72 3373.89 15728.64
00:36:44.539 ===================================================================================================================
00:36:44.539 Total : 20108.22 78.55 0.00 0.00 6354.72 3373.89 15728.64
00:36:44.539 0
00:36:44.539 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:44.539 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:44.539 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:44.539 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:44.539 | .driver_specific
00:36:44.539 | .nvme_error
00:36:44.539 | .status_code
00:36:44.539 | .command_transient_transport_error'
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 ))
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 76980
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 76980 ']'
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 76980
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76980
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76980'
killing process with pid 76980
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 76980
Received shutdown signal, test time was about 2.000000 seconds
00:36:44.799
00:36:44.799 Latency(us)
00:36:44.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:44.799 ===================================================================================================================
00:36:44.799 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:44.799 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 76980
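The `data_crc32_calc_done: *ERROR*: Data digest error` records above are expected: the test deliberately corrupts the CRC-32C data digest that NVMe/TCP carries on each data PDU (via the `accel_error_inject_error -o crc32c -t corrupt` RPC), then counts the resulting COMMAND TRANSIENT TRANSPORT ERROR completions. As a rough illustration only (this is not SPDK's accelerated implementation), the CRC-32C (Castagnoli) checksum behind the digest can be sketched bit-by-bit in Python:

```python
def crc32c(data: bytes) -> int:
    """Bit-by-bit CRC-32C (Castagnoli) over `data`, as used for the
    NVMe/TCP data digest. Illustrative only; real stacks use table- or
    instruction-accelerated versions."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0x82F63B78 is the bit-reversed CRC-32C polynomial
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1))
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789"
print(hex(crc32c(b"123456789")))  # prints 0xe3069283
```

Flipping any bit of the payload (or of the appended digest, which is what the corruption injection simulates) makes the receiver's recomputed CRC disagree, and the target fails the command with a transient transport error rather than accepting corrupt data.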
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=77381
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 77381 /var/tmp/bperf.sock
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 77381 ']'
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:45.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:45.057 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:45.057 [2024-07-21 08:32:54.645681] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization...
00:36:45.057 [2024-07-21 08:32:54.645780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77381 ]
00:36:45.057 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:45.057 Zero copy mechanism will not be used.
00:36:45.057 EAL: No free 2048 kB hugepages reported on node 1
00:36:45.314 [2024-07-21 08:32:54.703737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:45.314 [2024-07-21 08:32:54.789324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:45.314 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:45.314 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:45.315 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:45.315 08:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:45.572 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:45.572 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:45.572 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:45.572 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:45.572 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:45.572 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:46.145 nvme0n1
00:36:46.145 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:46.145 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:46.145 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:46.145 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:46.145 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:46.145 08:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:46.145 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:46.145 Zero copy mechanism will not be used.
00:36:46.145 Running I/O for 2 seconds...
00:36:46.145 [2024-07-21 08:32:55.700088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.145 [2024-07-21 08:32:55.700489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.145 [2024-07-21 08:32:55.700534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.145 [2024-07-21 08:32:55.706350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.145 [2024-07-21 08:32:55.706719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.145 [2024-07-21 08:32:55.706751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.145 [2024-07-21 08:32:55.712328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.145 [2024-07-21 08:32:55.712714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.145 [2024-07-21 08:32:55.712748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.145 [2024-07-21 08:32:55.718244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.145 [2024-07-21 08:32:55.718627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.145 [2024-07-21 08:32:55.718658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.145 [2024-07-21 08:32:55.724342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.145 [2024-07-21 08:32:55.724687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.145 [2024-07-21 08:32:55.724716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.145 [2024-07-21 08:32:55.730338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.145 [2024-07-21 08:32:55.730727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.145 [2024-07-21 08:32:55.730759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.145 [2024-07-21 08:32:55.736743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.145 [2024-07-21 08:32:55.737094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.145 [2024-07-21 08:32:55.737130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.145 [2024-07-21 08:32:55.743819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.145 [2024-07-21 08:32:55.744157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.146 [2024-07-21 08:32:55.744192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.146 [2024-07-21 08:32:55.750441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.146 [2024-07-21 08:32:55.750541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.146 [2024-07-21 08:32:55.750572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.146 [2024-07-21 08:32:55.756802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.146 [2024-07-21 08:32:55.757136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.146 [2024-07-21 08:32:55.757170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.146 [2024-07-21 08:32:55.762893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.146 [2024-07-21 08:32:55.763284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.146 [2024-07-21 08:32:55.763313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.146 [2024-07-21 08:32:55.768853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.146 [2024-07-21 08:32:55.769250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:46.146 [2024-07-21 08:32:55.769284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.146 [2024-07-21 08:32:55.774744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.146 [2024-07-21 08:32:55.775080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.146 [2024-07-21 08:32:55.775114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.780563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.406 [2024-07-21 08:32:55.780934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.406 [2024-07-21 08:32:55.780969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.787261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.406 [2024-07-21 08:32:55.787635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.406 [2024-07-21 08:32:55.787665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.794236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.406 [2024-07-21 08:32:55.794621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.406 [2024-07-21 08:32:55.794655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.800983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.406 [2024-07-21 08:32:55.801317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.406 [2024-07-21 08:32:55.801352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.806831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.406 [2024-07-21 08:32:55.806960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.406 [2024-07-21 08:32:55.806991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.813288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.406 [2024-07-21 08:32:55.813661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.406 [2024-07-21 08:32:55.813707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.819886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.406 [2024-07-21 08:32:55.820235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.406 [2024-07-21 08:32:55.820269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.828380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.406 [2024-07-21 08:32:55.828724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.406 [2024-07-21 08:32:55.828758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.835826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.406 [2024-07-21 08:32:55.836194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.406 [2024-07-21 08:32:55.836228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.843882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.406 [2024-07-21 08:32:55.844231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.406 [2024-07-21 08:32:55.844265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.851559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 
00:36:46.406 [2024-07-21 08:32:55.851976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.406 [2024-07-21 08:32:55.852011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.406 [2024-07-21 08:32:55.859510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.859845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.859874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.866924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.867277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.867317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.874815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.875158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.875192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.882228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.882565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.882604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.889846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.890250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.890284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.897464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.897854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.897886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.905260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.905593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.905636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.912793] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.913143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.913177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.920626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.920981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.921015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.927902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.928268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.928298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.935276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.935636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.935667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.941850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.942216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.942251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.949146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.949482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.949517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.955735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.956098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.956132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.963714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.964094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.964129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.971968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.972343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.972378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.979834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.979959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.979990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.988478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.988812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.988842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:55.996597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:55.996936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:55.996975] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:56.003946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:56.004300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:56.004334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:56.010721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:56.011059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:56.011094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:56.017735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:56.018093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:56.018128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:56.024570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:56.024896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:56.024947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.407 [2024-07-21 08:32:56.031148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.407 [2024-07-21 08:32:56.031242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.407 [2024-07-21 08:32:56.031273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.668 [2024-07-21 08:32:56.038127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.668 [2024-07-21 08:32:56.038443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.668 [2024-07-21 08:32:56.038482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.668 [2024-07-21 08:32:56.043870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.668 [2024-07-21 08:32:56.044216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.668 [2024-07-21 08:32:56.044250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.668 [2024-07-21 08:32:56.049748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.668 [2024-07-21 08:32:56.050096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.668 [2024-07-21 08:32:56.050130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.668 [2024-07-21 08:32:56.055691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.668 [2024-07-21 08:32:56.056080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.056115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.062106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.062480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.062509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.068879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.069238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.069275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.074648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.074986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.075020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.080590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.080983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.081018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.086914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.087266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.087295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.093816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.094193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.094226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.100650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 
00:36:46.669 [2024-07-21 08:32:56.101014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.101044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.106492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.106838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.106868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.112548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.112884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.112926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.118429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.118808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.118838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.124522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.124622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.124653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.131855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.132267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.132296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.138792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.139143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.139177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.145880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.146239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.146268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 
08:32:56.152424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.152780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.152809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.158143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.158483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.158516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.163996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.164333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.164372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.169738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.170081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.170115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.175558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.175919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.175949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.181704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.182050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.182079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.189049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.189397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.189431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.194951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.195285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.195319] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.200832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.201243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.201277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.206865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.207238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.207267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.212699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.213101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.213135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.219504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.219870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 
08:32:56.219900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.227017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.227369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.227403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.233956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.234327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.669 [2024-07-21 08:32:56.234360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.669 [2024-07-21 08:32:56.240983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.669 [2024-07-21 08:32:56.241349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.670 [2024-07-21 08:32:56.241382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.670 [2024-07-21 08:32:56.247774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.670 [2024-07-21 08:32:56.248108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.670 [2024-07-21 08:32:56.248135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.670 [2024-07-21 08:32:56.253823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.670 [2024-07-21 08:32:56.254179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.670 [2024-07-21 08:32:56.254213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.670 [2024-07-21 08:32:56.260120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.670 [2024-07-21 08:32:56.260454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.670 [2024-07-21 08:32:56.260488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.670 [2024-07-21 08:32:56.266240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.670 [2024-07-21 08:32:56.266574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.670 [2024-07-21 08:32:56.266607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.670 [2024-07-21 08:32:56.272804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.670 [2024-07-21 08:32:56.273148] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.670 [2024-07-21 08:32:56.273183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.670 [2024-07-21 08:32:56.278743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.670 [2024-07-21 08:32:56.279077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.670 [2024-07-21 08:32:56.279111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.670 [2024-07-21 08:32:56.284507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.670 [2024-07-21 08:32:56.284850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.670 [2024-07-21 08:32:56.284879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.670 [2024-07-21 08:32:56.290141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.670 [2024-07-21 08:32:56.290234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.670 [2024-07-21 08:32:56.290264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.670 [2024-07-21 08:32:56.297470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 
08:32:56.297818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.297848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.303562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.303954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.303988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.309299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.309675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.309710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.315303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.315642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.315689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.322164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.322499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.322533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.329107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.329468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.329508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.336101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.336439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.336473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.342537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.342887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.342933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.348622] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.348963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.348997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.354484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.354849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.354880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.360360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.360743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.360773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.366247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.366610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.366651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.372766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.373227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.373261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.379904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.380268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.380298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.386780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.387161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.387195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.393690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.394021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.394049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.400016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.400384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.400417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.406795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.407154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.407187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.413681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.414015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.414052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.420541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.420864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.420894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.427298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.427653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.427686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.434098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.434433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.434467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.440709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.441035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.441069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.447937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.448321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:46.930 [2024-07-21 08:32:56.448354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.454452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.454817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.454847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.460167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.460502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.460536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.466013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.930 [2024-07-21 08:32:56.466358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.930 [2024-07-21 08:32:56.466389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.930 [2024-07-21 08:32:56.471832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.472168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.472202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.477828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.478186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.478219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.484497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.484833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.484862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.491390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.491739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.491767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.498285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.498634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.498663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.504624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.505028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.505061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.512264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.512600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.512641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.519313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.519718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.519762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.525771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 
00:36:46.931 [2024-07-21 08:32:56.526088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.526119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.532573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.532916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.532946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.539561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.539907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.539936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.547295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.547644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.547683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.931 [2024-07-21 08:32:56.554256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:46.931 [2024-07-21 08:32:56.554633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.931 [2024-07-21 08:32:56.554689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.560489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.560801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.560831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.566693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.567026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.567065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.572585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.572922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.572951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 
08:32:56.579519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.579844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.579872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.585881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.586230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.586267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.593000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.593352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.593386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.600725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.601111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.601145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.609391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.609740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.609770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.617511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.617852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.617901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.625410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.625754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.625784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.634010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.634348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.634381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.642091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.642440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.642474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.650546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.650986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.651019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.658949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.659285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.659318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.666865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.667264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.667297] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.675427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.192 [2024-07-21 08:32:56.675768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.192 [2024-07-21 08:32:56.675798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.192 [2024-07-21 08:32:56.683832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.684226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.684260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.691907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.692250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.692284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.699166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.699514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.699548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.705326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.705712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.705740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.711132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.711485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.711518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.717029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.717397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.717431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.722843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.723185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.723218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.728827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.729184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.729217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.735351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.735691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.735721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.741878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.742201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.742231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.747895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.748239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.748268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.753284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.753648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.753684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.758921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.759261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.759291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.764880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.765190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.765221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.771094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 
00:36:47.193 [2024-07-21 08:32:56.771410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.771440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.777001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.777310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.777340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.783109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.783418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.783449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.789425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.789798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.789829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.796578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.796955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.796990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.803310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.803668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.803698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.810571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.810899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.810929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.193 [2024-07-21 08:32:56.816947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.193 [2024-07-21 08:32:56.817053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.193 [2024-07-21 08:32:56.817081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 
08:32:56.824487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.824845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.824877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.832160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.832555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.832586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.839471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.839882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.839928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.846917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.847204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.847234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.854368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.854794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.854840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.862041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.862439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.862470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.869886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.870280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.870311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.877467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.877823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.877855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.885056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.885372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.885403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.892154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.892580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.892630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.899683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.899987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.900017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.907112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.907450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.907499] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.914502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.914856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.914886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.922089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.922388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.922422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.929592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.929894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.929926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.936790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.937144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.937175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.944194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.944490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.944521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.950890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.951242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.951273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.956720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.956974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.957004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.961758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.962041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.962071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.967094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.967356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.967388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.972577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.972828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.972860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.978143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.978401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.978441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.983953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.984196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.984228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.990384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.990757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.990788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:56.997644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:56.998029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:56.998058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:57.004466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.454 [2024-07-21 08:32:57.004744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.454 [2024-07-21 08:32:57.004789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.454 [2024-07-21 08:32:57.010328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 
00:36:47.454 [2024-07-21 08:32:57.010590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.010645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.015441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.015719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.015750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.021071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.021320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.021350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.026731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.027025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.027056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.032946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.033183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.033214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.037890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.038126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.038166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.042781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.043019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.043049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.047604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.047861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.047892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.052458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.052704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.052735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.057157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.057399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.057430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.062596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.062840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.062870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.067985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.068227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.068272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.072890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.073131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.073167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.077781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.078025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.078055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.454 [2024-07-21 08:32:57.082882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.454 [2024-07-21 08:32:57.083128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.454 [2024-07-21 08:32:57.083158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.088782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.089026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.089057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.093729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.093985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.094015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.098563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.098802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.098833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.103367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.103608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.103645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.108897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.109162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.109191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.114496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.114760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.114791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.119425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.119680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.119711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.124220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.124462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.124493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.129088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.129327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.129380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.134107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.134360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.134390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.139006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.139247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.139277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.144023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.144330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.144363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.149328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.149584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.149622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.154193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.154441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.154471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.159178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.159420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.159465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.164971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.165255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.713 [2024-07-21 08:32:57.165294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.713 [2024-07-21 08:32:57.171543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.713 [2024-07-21 08:32:57.171843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.171874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.179011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.179335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.179365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.186260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.186546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.186576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.193112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.193426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.193470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.199999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.200310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.200341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.206826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.207144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.207174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.214309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.214625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.214669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.221423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.221689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.221727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.227771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.228040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.228070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.233542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.233830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.233861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.238561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.238799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.238830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.243459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.243702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.243734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.248341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.248584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.248624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.254251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.254512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.254542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.259728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.259969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.260014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.266214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.266540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.266584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.272798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.273041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.273071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.279256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.279531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.279561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.285745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.286006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.286059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.292224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.292514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.292544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.298112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.298403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.298434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.304732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.304976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.305006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.311248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.311490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.311519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.317969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.318253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.318282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.323418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.323666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.323696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.328432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.328680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.328726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.333287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.333524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.333554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.714 [2024-07-21 08:32:57.338146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.714 [2024-07-21 08:32:57.338390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.714 [2024-07-21 08:32:57.338420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.343056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.343293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.343333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.348111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.348391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.348439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.353803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.354046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.354077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.359421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.359704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.359735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.366087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.366357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.366387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.372005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.372261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.372297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.377645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.377892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.377923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.383174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.383432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.383461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.389703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.389978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.390023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.395439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.395745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.395777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.400515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.400777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.400807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.405402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.405649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.405679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.410409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.410653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.410687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.415274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.415544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.415574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.421146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.421454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.421484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.428077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.428407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.428437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.434931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.435244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.435275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.442609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.442922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.442968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.972 [2024-07-21 08:32:57.450104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.972 [2024-07-21 08:32:57.450417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.972 [2024-07-21 08:32:57.450450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:47.973 [2024-07-21 08:32:57.457443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.973 [2024-07-21 08:32:57.457774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.973 [2024-07-21 08:32:57.457805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:47.973 [2024-07-21 08:32:57.464125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.973 [2024-07-21 08:32:57.464489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.973 [2024-07-21 08:32:57.464534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:47.973 [2024-07-21 08:32:57.471127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.973 [2024-07-21 08:32:57.471418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.973 [2024-07-21 08:32:57.471449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:47.973 [2024-07-21 08:32:57.478219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90
00:36:47.973 [2024-07-21 08:32:57.478538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.973 [2024-07-21 08:32:57.478574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.485326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.485664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.485696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.492803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.493109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.493140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.499429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.499695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.499726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.506469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.506807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.506838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.513578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.513851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.513898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.519625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.519882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.519913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.525821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.526070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.526100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.530744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.530985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.531015] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.535667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.535918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.535949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.540775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.541015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.541045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.545742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.545982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.546023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.550987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.551233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.551264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.556223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.556472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.556517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.561501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.561754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.561785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.566328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.566568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.566597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.571885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.572122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.572165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.577173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.577414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.577444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.582792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.583039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.583068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.588241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.588476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.588515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.593107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.593345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.593374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.973 [2024-07-21 08:32:57.598147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:47.973 [2024-07-21 08:32:57.598386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.973 [2024-07-21 08:32:57.598416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.602877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.603118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.603162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.607707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.607961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.607991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.612529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 
00:36:48.231 [2024-07-21 08:32:57.612776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.612806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.617396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.617644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.617682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.622234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.622484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.622520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.627105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.627356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.627410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.632148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.632388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.632432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.637158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.637413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.637443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.642194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.642442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.642488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.647161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.647409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.647439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 
08:32:57.652047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.652285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.652315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.656834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.657074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.657103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.661749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.661992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.662022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.666609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.666992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.667021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.671995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.672254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.672284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.677937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.678185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.678215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.682882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.683119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.683160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.687812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.688061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.688093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.692749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.692991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.693022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.231 [2024-07-21 08:32:57.697580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11ef640) with pdu=0x2000190fef90 00:36:48.231 [2024-07-21 08:32:57.697758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.231 [2024-07-21 08:32:57.697786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.231 00:36:48.231 Latency(us) 00:36:48.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.231 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:48.231 nvme0n1 : 2.00 4911.06 613.88 0.00 0.00 3250.21 2196.67 8592.50 00:36:48.231 =================================================================================================================== 00:36:48.231 Total : 4911.06 613.88 0.00 0.00 3250.21 2196.67 8592.50 00:36:48.231 0 00:36:48.231 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:48.231 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:48.232 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:48.232 | .driver_specific 
00:36:48.232 | .nvme_error 00:36:48.232 | .status_code 00:36:48.232 | .command_transient_transport_error' 00:36:48.232 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 317 > 0 )) 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 77381 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 77381 ']' 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 77381 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77381 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77381' 00:36:48.491 killing process with pid 77381 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 77381 00:36:48.491 Received shutdown signal, test time was about 2.000000 seconds 00:36:48.491 00:36:48.491 Latency(us) 00:36:48.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.491 =================================================================================================================== 00:36:48.491 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:36:48.491 08:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 77381 00:36:48.750 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 76017 00:36:48.750 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 76017 ']' 00:36:48.750 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 76017 00:36:48.750 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:48.750 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:48.750 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76017 00:36:48.750 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:48.750 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:48.750 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76017' 00:36:48.750 killing process with pid 76017 00:36:48.750 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 76017 00:36:48.750 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 76017 00:36:49.010 00:36:49.010 real 0m15.169s 00:36:49.010 user 0m29.867s 00:36:49.010 sys 0m4.262s 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:49.010 ************************************ 00:36:49.010 END TEST nvmf_digest_error 00:36:49.010 ************************************ 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 
-- # return 0 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:49.010 rmmod nvme_tcp 00:36:49.010 rmmod nvme_fabrics 00:36:49.010 rmmod nvme_keyring 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 76017 ']' 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 76017 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 76017 ']' 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 76017 00:36:49.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (76017) - No such process 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 76017 is not found' 00:36:49.010 Process with pid 76017 is not found 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:49.010 
08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:49.010 08:32:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.544 08:33:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:51.544 00:36:51.544 real 0m34.880s 00:36:51.544 user 1m0.729s 00:36:51.544 sys 0m10.224s 00:36:51.544 08:33:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:51.544 08:33:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.544 ************************************ 00:36:51.544 END TEST nvmf_digest 00:36:51.544 ************************************ 00:36:51.544 08:33:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:51.544 08:33:00 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:36:51.544 08:33:00 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:36:51.544 08:33:00 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:36:51.544 08:33:00 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:51.544 08:33:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:51.544 08:33:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:51.544 08:33:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.544 ************************************ 00:36:51.544 START TEST nvmf_bdevperf 00:36:51.544 ************************************ 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:51.544 * Looking for test storage... 00:36:51.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:51.544 08:33:00 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:36:51.544 08:33:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:53.447 08:33:02 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:53.447 
08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:53.447 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:53.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:53.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:53.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:53.448 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:53.448 08:33:02 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:53.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:53.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:36:53.448 00:36:53.448 --- 10.0.0.2 ping statistics --- 00:36:53.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.448 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:53.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:53.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:36:53.448 00:36:53.448 --- 10.0.0.1 ping statistics --- 00:36:53.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.448 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=79863 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 79863 00:36:53.448 08:33:02 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 79863 ']' 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:53.448 08:33:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.448 [2024-07-21 08:33:02.972180] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:36:53.448 [2024-07-21 08:33:02.972261] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:53.448 EAL: No free 2048 kB hugepages reported on node 1 00:36:53.448 [2024-07-21 08:33:03.038694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:53.708 [2024-07-21 08:33:03.126560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:53.708 [2024-07-21 08:33:03.126660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:53.708 [2024-07-21 08:33:03.126687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:53.708 [2024-07-21 08:33:03.126698] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:53.708 [2024-07-21 08:33:03.126709] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:53.708 [2024-07-21 08:33:03.126768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:53.708 [2024-07-21 08:33:03.126798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:53.708 [2024-07-21 08:33:03.126801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.708 [2024-07-21 08:33:03.266345] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.708 Malloc0 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.708 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:53.966 [2024-07-21 08:33:03.339224] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.967 08:33:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.967 08:33:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:53.967 08:33:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:53.967 08:33:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:53.967 08:33:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:53.967 08:33:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:36:53.967 08:33:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:53.967 { 00:36:53.967 "params": { 00:36:53.967 "name": "Nvme$subsystem", 00:36:53.967 "trtype": "$TEST_TRANSPORT", 00:36:53.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.967 "adrfam": "ipv4", 00:36:53.967 "trsvcid": "$NVMF_PORT", 00:36:53.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.967 "hdgst": ${hdgst:-false}, 00:36:53.967 "ddgst": ${ddgst:-false} 00:36:53.967 }, 00:36:53.967 "method": "bdev_nvme_attach_controller" 00:36:53.967 } 00:36:53.967 EOF 00:36:53.967 )") 00:36:53.967 08:33:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:53.967 08:33:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:53.967 08:33:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:53.967 08:33:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:53.967 "params": { 00:36:53.967 "name": "Nvme1", 00:36:53.967 "trtype": "tcp", 00:36:53.967 "traddr": "10.0.0.2", 00:36:53.967 "adrfam": "ipv4", 00:36:53.967 "trsvcid": "4420", 00:36:53.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:53.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:53.967 "hdgst": false, 00:36:53.967 "ddgst": false 00:36:53.967 }, 00:36:53.967 "method": "bdev_nvme_attach_controller" 00:36:53.967 }' 00:36:53.967 [2024-07-21 08:33:03.387350] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:36:53.967 [2024-07-21 08:33:03.387439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80002 ] 00:36:53.967 EAL: No free 2048 kB hugepages reported on node 1 00:36:53.967 [2024-07-21 08:33:03.449263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.967 [2024-07-21 08:33:03.537828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.224 Running I/O for 1 seconds... 00:36:55.600 00:36:55.600 Latency(us) 00:36:55.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.600 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:55.600 Verification LBA range: start 0x0 length 0x4000 00:36:55.600 Nvme1n1 : 1.01 8539.25 33.36 0.00 0.00 14926.40 964.84 15243.19 00:36:55.600 =================================================================================================================== 00:36:55.600 Total : 8539.25 33.36 0.00 0.00 14926.40 964.84 15243.19 00:36:55.600 08:33:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=80148 00:36:55.600 08:33:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:55.600 08:33:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:55.600 08:33:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:55.600 08:33:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:55.600 08:33:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:55.600 08:33:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:55.600 08:33:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:36:55.600 { 00:36:55.600 "params": { 00:36:55.600 "name": "Nvme$subsystem", 00:36:55.600 "trtype": "$TEST_TRANSPORT", 00:36:55.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:55.601 "adrfam": "ipv4", 00:36:55.601 "trsvcid": "$NVMF_PORT", 00:36:55.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:55.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:55.601 "hdgst": ${hdgst:-false}, 00:36:55.601 "ddgst": ${ddgst:-false} 00:36:55.601 }, 00:36:55.601 "method": "bdev_nvme_attach_controller" 00:36:55.601 } 00:36:55.601 EOF 00:36:55.601 )") 00:36:55.601 08:33:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:55.601 08:33:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:55.601 08:33:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:55.601 08:33:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:55.601 "params": { 00:36:55.601 "name": "Nvme1", 00:36:55.601 "trtype": "tcp", 00:36:55.601 "traddr": "10.0.0.2", 00:36:55.601 "adrfam": "ipv4", 00:36:55.601 "trsvcid": "4420", 00:36:55.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:55.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:55.601 "hdgst": false, 00:36:55.601 "ddgst": false 00:36:55.601 }, 00:36:55.601 "method": "bdev_nvme_attach_controller" 00:36:55.601 }' 00:36:55.601 [2024-07-21 08:33:05.077393] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:36:55.601 [2024-07-21 08:33:05.077485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80148 ] 00:36:55.601 EAL: No free 2048 kB hugepages reported on node 1 00:36:55.601 [2024-07-21 08:33:05.137178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.601 [2024-07-21 08:33:05.225445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.860 Running I/O for 15 seconds... 00:36:59.161 08:33:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 79863 00:36:59.161 08:33:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:59.161 [2024-07-21 08:33:08.045611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.045681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.045722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.045738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.045755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.045772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.045789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.045804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.045822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.045837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.045853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.045868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.045886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.045911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.045927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.045978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046047] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.161 [2024-07-21 08:33:08.046452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:59.161 [2024-07-21 08:33:08.046491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.161 [2024-07-21 08:33:08.046508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.046524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.046557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.046590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.046643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.046689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.046718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.046746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.046773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.046802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.046830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.046858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.046886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.046943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.046972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.046988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 
[2024-07-21 08:33:08.047083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047259] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.162 [2024-07-21 08:33:08.047409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047672] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.047974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.047989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.162 [2024-07-21 08:33:08.048006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.162 [2024-07-21 08:33:08.048021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:59.163 [2024-07-21 08:33:08.048038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.163 [2024-07-21 08:33:08.048474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 
[2024-07-21 08:33:08.048589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.048977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.048994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 
08:33:08.049192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049369] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.163 [2024-07-21 08:33:08.049432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.163 [2024-07-21 08:33:08.049449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:59.164 [2024-07-21 08:33:08.049842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.164 [2024-07-21 08:33:08.049870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.164 [2024-07-21 08:33:08.049929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.164 [2024-07-21 08:33:08.049955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.049998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.164 [2024-07-21 08:33:08.050014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.050030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.164 [2024-07-21 08:33:08.050045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.050062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.164 [2024-07-21 08:33:08.050077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.050094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.164 [2024-07-21 08:33:08.050110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.050126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0a1f0 is same with the state(5) to be set 00:36:59.164 [2024-07-21 08:33:08.050144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:59.164 [2024-07-21 08:33:08.050157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:59.164 [2024-07-21 08:33:08.050170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:30984 len:8 PRP1 0x0 PRP2 0x0 00:36:59.164 [2024-07-21 08:33:08.050184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.164 [2024-07-21 08:33:08.050248] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e0a1f0 was disconnected and freed. reset controller. 00:36:59.164 [2024-07-21 08:33:08.053968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.164 [2024-07-21 08:33:08.054052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.164 [2024-07-21 08:33:08.054776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.164 [2024-07-21 08:33:08.054818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.164 [2024-07-21 08:33:08.054835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.164 [2024-07-21 08:33:08.055075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.164 [2024-07-21 08:33:08.055319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.164 [2024-07-21 08:33:08.055343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.164 [2024-07-21 08:33:08.055361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.164 [2024-07-21 08:33:08.059006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
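The completion dump above repeats "ABORTED - SQ DELETION (00/08)". SPDK prints NVMe completion status as (SCT/SC): the Status Code Type and Status Code, both in hex. SCT 0x0 is the Generic Command Status type, and within it SC 0x08 is "Command Aborted due to SQ Deletion" (NVMe base specification) — i.e., the queued WRITE/READ commands were aborted because their submission queue was torn down during the disconnect. A minimal sketch of a decoder for the generic-status values seen in this log (the table is deliberately partial):

```python
def decode_status(sct: int, sc: int) -> str:
    """Decode an NVMe completion status printed by SPDK as (SCT/SC)."""
    # Partial Generic Command Status (SCT 0x0) table, NVMe base spec.
    generic = {
        0x00: "SUCCESS",
        0x07: "ABORTED - BY REQUEST",
        0x08: "ABORTED - SQ DELETION",
    }
    if sct == 0x0:
        return generic.get(sc, f"GENERIC/{sc:#04x}")
    return f"SCT {sct:#04x} / SC {sc:#04x}"

# The (00/08) pair from the log entries above:
print(decode_status(0x00, 0x08))  # ABORTED - SQ DELETION
```

The trailing `p:0 m:0 dnr:0` fields in each entry are the phase tag, "more" bit, and do-not-retry bit from the same completion dword; `dnr:0` is why the driver keeps retrying the controller reset below.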
00:36:59.164 [2024-07-21 08:33:08.068124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.164 [2024-07-21 08:33:08.068606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.164 [2024-07-21 08:33:08.068641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.164 [2024-07-21 08:33:08.068670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.164 [2024-07-21 08:33:08.068883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.164 [2024-07-21 08:33:08.069146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.164 [2024-07-21 08:33:08.069169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.164 [2024-07-21 08:33:08.069186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.164 [2024-07-21 08:33:08.072797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.164 [2024-07-21 08:33:08.082182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.164 [2024-07-21 08:33:08.082592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.164 [2024-07-21 08:33:08.082638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.164 [2024-07-21 08:33:08.082676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.164 [2024-07-21 08:33:08.082905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.164 [2024-07-21 08:33:08.083157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.164 [2024-07-21 08:33:08.083181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.164 [2024-07-21 08:33:08.083197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.164 [2024-07-21 08:33:08.086812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.164 [2024-07-21 08:33:08.095870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.164 [2024-07-21 08:33:08.096366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.164 [2024-07-21 08:33:08.096397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.164 [2024-07-21 08:33:08.096415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.164 [2024-07-21 08:33:08.096675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.164 [2024-07-21 08:33:08.096896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.164 [2024-07-21 08:33:08.096935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.164 [2024-07-21 08:33:08.096951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.164 [2024-07-21 08:33:08.100556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.164 [2024-07-21 08:33:08.109921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.164 [2024-07-21 08:33:08.110337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.164 [2024-07-21 08:33:08.110368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.164 [2024-07-21 08:33:08.110399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.164 [2024-07-21 08:33:08.110664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.164 [2024-07-21 08:33:08.110919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.164 [2024-07-21 08:33:08.110940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.164 [2024-07-21 08:33:08.110954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.164 [2024-07-21 08:33:08.114574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.164 [2024-07-21 08:33:08.123473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.164 [2024-07-21 08:33:08.123828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.164 [2024-07-21 08:33:08.123857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.164 [2024-07-21 08:33:08.123875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.164 [2024-07-21 08:33:08.124099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.164 [2024-07-21 08:33:08.124311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.165 [2024-07-21 08:33:08.124332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.165 [2024-07-21 08:33:08.124347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.165 [2024-07-21 08:33:08.127552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.165 [2024-07-21 08:33:08.136922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.165 [2024-07-21 08:33:08.137276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-21 08:33:08.137304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.165 [2024-07-21 08:33:08.137320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.165 [2024-07-21 08:33:08.137541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.165 [2024-07-21 08:33:08.137795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.165 [2024-07-21 08:33:08.137816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.165 [2024-07-21 08:33:08.137829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.165 [2024-07-21 08:33:08.141375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.165 [2024-07-21 08:33:08.150910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.165 [2024-07-21 08:33:08.151312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-21 08:33:08.151363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.165 [2024-07-21 08:33:08.151379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.165 [2024-07-21 08:33:08.151654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.165 [2024-07-21 08:33:08.151867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.165 [2024-07-21 08:33:08.151906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.165 [2024-07-21 08:33:08.151921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.165 [2024-07-21 08:33:08.155406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.165 [2024-07-21 08:33:08.164886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.165 [2024-07-21 08:33:08.165285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-21 08:33:08.165324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.165 [2024-07-21 08:33:08.165342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.165 [2024-07-21 08:33:08.165586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.165 [2024-07-21 08:33:08.165837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.165 [2024-07-21 08:33:08.165862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.165 [2024-07-21 08:33:08.165879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.165 [2024-07-21 08:33:08.169470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.165 [2024-07-21 08:33:08.178736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.165 [2024-07-21 08:33:08.179163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-21 08:33:08.179212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.165 [2024-07-21 08:33:08.179231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.165 [2024-07-21 08:33:08.179468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.165 [2024-07-21 08:33:08.179722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.165 [2024-07-21 08:33:08.179747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.165 [2024-07-21 08:33:08.179763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.165 [2024-07-21 08:33:08.183329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.165 [2024-07-21 08:33:08.192586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.165 [2024-07-21 08:33:08.193015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-21 08:33:08.193047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.165 [2024-07-21 08:33:08.193065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.165 [2024-07-21 08:33:08.193303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.165 [2024-07-21 08:33:08.193545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.165 [2024-07-21 08:33:08.193569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.165 [2024-07-21 08:33:08.193585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.165 [2024-07-21 08:33:08.197175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.165 [2024-07-21 08:33:08.206437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.165 [2024-07-21 08:33:08.206867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-21 08:33:08.206896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.165 [2024-07-21 08:33:08.206912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.165 [2024-07-21 08:33:08.207164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.165 [2024-07-21 08:33:08.207419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.165 [2024-07-21 08:33:08.207444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.165 [2024-07-21 08:33:08.207460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.165 [2024-07-21 08:33:08.211039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.165 [2024-07-21 08:33:08.220292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.165 [2024-07-21 08:33:08.220708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-21 08:33:08.220739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.165 [2024-07-21 08:33:08.220757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.165 [2024-07-21 08:33:08.221004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.165 [2024-07-21 08:33:08.221247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.165 [2024-07-21 08:33:08.221271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.165 [2024-07-21 08:33:08.221287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.165 [2024-07-21 08:33:08.224862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.165 [2024-07-21 08:33:08.234121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.165 [2024-07-21 08:33:08.234541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-21 08:33:08.234572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.165 [2024-07-21 08:33:08.234594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.165 [2024-07-21 08:33:08.234840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.165 [2024-07-21 08:33:08.235084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.165 [2024-07-21 08:33:08.235108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.165 [2024-07-21 08:33:08.235125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.165 [2024-07-21 08:33:08.238698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.165 [2024-07-21 08:33:08.247956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.165 [2024-07-21 08:33:08.248367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.165 [2024-07-21 08:33:08.248398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.165 [2024-07-21 08:33:08.248416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.165 [2024-07-21 08:33:08.248671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.165 [2024-07-21 08:33:08.248915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.165 [2024-07-21 08:33:08.248939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.165 [2024-07-21 08:33:08.248955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.165 [2024-07-21 08:33:08.252518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.165 [2024-07-21 08:33:08.261799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.165 [2024-07-21 08:33:08.262199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.166 [2024-07-21 08:33:08.262240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.166 [2024-07-21 08:33:08.262259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.166 [2024-07-21 08:33:08.262496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.166 [2024-07-21 08:33:08.262754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.166 [2024-07-21 08:33:08.262779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.166 [2024-07-21 08:33:08.262795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.166 [2024-07-21 08:33:08.266362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.166 [2024-07-21 08:33:08.275649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.166 [2024-07-21 08:33:08.276065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.166 [2024-07-21 08:33:08.276096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.166 [2024-07-21 08:33:08.276124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.166 [2024-07-21 08:33:08.276361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.166 [2024-07-21 08:33:08.276604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.166 [2024-07-21 08:33:08.276637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.166 [2024-07-21 08:33:08.276654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.166 [2024-07-21 08:33:08.280197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.166 [2024-07-21 08:33:08.289492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.166 [2024-07-21 08:33:08.289869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.166 [2024-07-21 08:33:08.289897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.166 [2024-07-21 08:33:08.289939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.166 [2024-07-21 08:33:08.290177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.166 [2024-07-21 08:33:08.290420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.166 [2024-07-21 08:33:08.290444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.166 [2024-07-21 08:33:08.290465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.166 [2024-07-21 08:33:08.294064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.166 [2024-07-21 08:33:08.303420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.166 [2024-07-21 08:33:08.303792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.166 [2024-07-21 08:33:08.303821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.166 [2024-07-21 08:33:08.303837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.166 [2024-07-21 08:33:08.304070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.166 [2024-07-21 08:33:08.304312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.166 [2024-07-21 08:33:08.304337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.166 [2024-07-21 08:33:08.304353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.166 [2024-07-21 08:33:08.307980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.166 [2024-07-21 08:33:08.317357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.166 [2024-07-21 08:33:08.317727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.166 [2024-07-21 08:33:08.317755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.166 [2024-07-21 08:33:08.317771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.166 [2024-07-21 08:33:08.318010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.166 [2024-07-21 08:33:08.318253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.166 [2024-07-21 08:33:08.318276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.166 [2024-07-21 08:33:08.318292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.166 [2024-07-21 08:33:08.321918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.166 [2024-07-21 08:33:08.331212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.166 [2024-07-21 08:33:08.331629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.166 [2024-07-21 08:33:08.331661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.166 [2024-07-21 08:33:08.331679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.166 [2024-07-21 08:33:08.331918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.166 [2024-07-21 08:33:08.332159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.166 [2024-07-21 08:33:08.332184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.166 [2024-07-21 08:33:08.332199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.166 [2024-07-21 08:33:08.335789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.166 [2024-07-21 08:33:08.345052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.166 [2024-07-21 08:33:08.345437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.166 [2024-07-21 08:33:08.345474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.166 [2024-07-21 08:33:08.345490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.166 [2024-07-21 08:33:08.345752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.166 [2024-07-21 08:33:08.345997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.166 [2024-07-21 08:33:08.346020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.166 [2024-07-21 08:33:08.346036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.166 [2024-07-21 08:33:08.349605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.166 [2024-07-21 08:33:08.359129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.166 [2024-07-21 08:33:08.359507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.166 [2024-07-21 08:33:08.359546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.166 [2024-07-21 08:33:08.359564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.166 [2024-07-21 08:33:08.359811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.166 [2024-07-21 08:33:08.360054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.166 [2024-07-21 08:33:08.360079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.166 [2024-07-21 08:33:08.360095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.166 [2024-07-21 08:33:08.363667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.166 [2024-07-21 08:33:08.373135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.166 [2024-07-21 08:33:08.373557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.166 [2024-07-21 08:33:08.373588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.166 [2024-07-21 08:33:08.373623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.166 [2024-07-21 08:33:08.373863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.166 [2024-07-21 08:33:08.374106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.166 [2024-07-21 08:33:08.374130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.166 [2024-07-21 08:33:08.374146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.166 [2024-07-21 08:33:08.377721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.166 [2024-07-21 08:33:08.386996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.166 [2024-07-21 08:33:08.387371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.166 [2024-07-21 08:33:08.387412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.166 [2024-07-21 08:33:08.387429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.166 [2024-07-21 08:33:08.387676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.166 [2024-07-21 08:33:08.387926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.166 [2024-07-21 08:33:08.387950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.166 [2024-07-21 08:33:08.387965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.166 [2024-07-21 08:33:08.391531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.166 [2024-07-21 08:33:08.401004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.166 [2024-07-21 08:33:08.401404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.166 [2024-07-21 08:33:08.401438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.166 [2024-07-21 08:33:08.401456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.166 [2024-07-21 08:33:08.401708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.166 [2024-07-21 08:33:08.401952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.166 [2024-07-21 08:33:08.401976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.166 [2024-07-21 08:33:08.401992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.166 [2024-07-21 08:33:08.405557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.167 [2024-07-21 08:33:08.415030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.167 [2024-07-21 08:33:08.415431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-21 08:33:08.415461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.167 [2024-07-21 08:33:08.415479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.167 [2024-07-21 08:33:08.415732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.167 [2024-07-21 08:33:08.415975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.167 [2024-07-21 08:33:08.416000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.167 [2024-07-21 08:33:08.416016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.167 [2024-07-21 08:33:08.419576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.167 [2024-07-21 08:33:08.429059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.167 [2024-07-21 08:33:08.429434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-21 08:33:08.429474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.167 [2024-07-21 08:33:08.429491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.167 [2024-07-21 08:33:08.429740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.167 [2024-07-21 08:33:08.429983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.167 [2024-07-21 08:33:08.430007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.167 [2024-07-21 08:33:08.430024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.167 [2024-07-21 08:33:08.433594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.167 [2024-07-21 08:33:08.443063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.167 [2024-07-21 08:33:08.443506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-21 08:33:08.443552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.167 [2024-07-21 08:33:08.443570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.167 [2024-07-21 08:33:08.443818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.167 [2024-07-21 08:33:08.444062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.167 [2024-07-21 08:33:08.444086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.167 [2024-07-21 08:33:08.444102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.167 [2024-07-21 08:33:08.447686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.167 [2024-07-21 08:33:08.456942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.167 [2024-07-21 08:33:08.457398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-21 08:33:08.457438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.167 [2024-07-21 08:33:08.457456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.167 [2024-07-21 08:33:08.457705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.167 [2024-07-21 08:33:08.457949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.167 [2024-07-21 08:33:08.457973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.167 [2024-07-21 08:33:08.457989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.167 [2024-07-21 08:33:08.461559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.167 [2024-07-21 08:33:08.470825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.167 [2024-07-21 08:33:08.471229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-21 08:33:08.471287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.167 [2024-07-21 08:33:08.471305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.167 [2024-07-21 08:33:08.471543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.167 [2024-07-21 08:33:08.471794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.167 [2024-07-21 08:33:08.471818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.167 [2024-07-21 08:33:08.471835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.167 [2024-07-21 08:33:08.475407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.167 [2024-07-21 08:33:08.484670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.167 [2024-07-21 08:33:08.485048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-21 08:33:08.485078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.167 [2024-07-21 08:33:08.485101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.167 [2024-07-21 08:33:08.485339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.167 [2024-07-21 08:33:08.485581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.167 [2024-07-21 08:33:08.485628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.167 [2024-07-21 08:33:08.485647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.167 [2024-07-21 08:33:08.489213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.167 [2024-07-21 08:33:08.498694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.167 [2024-07-21 08:33:08.499102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-21 08:33:08.499133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.167 [2024-07-21 08:33:08.499151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.167 [2024-07-21 08:33:08.499388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.167 [2024-07-21 08:33:08.499640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.167 [2024-07-21 08:33:08.499666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.167 [2024-07-21 08:33:08.499682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.167 [2024-07-21 08:33:08.503246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.167 [2024-07-21 08:33:08.512717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.167 [2024-07-21 08:33:08.513138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-21 08:33:08.513169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.167 [2024-07-21 08:33:08.513193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.167 [2024-07-21 08:33:08.513430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.167 [2024-07-21 08:33:08.513682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.167 [2024-07-21 08:33:08.513706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.167 [2024-07-21 08:33:08.513722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.167 [2024-07-21 08:33:08.517290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.167 [2024-07-21 08:33:08.526550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.167 [2024-07-21 08:33:08.526936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-21 08:33:08.526967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.167 [2024-07-21 08:33:08.526985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.167 [2024-07-21 08:33:08.527223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.167 [2024-07-21 08:33:08.527465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.167 [2024-07-21 08:33:08.527495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.167 [2024-07-21 08:33:08.527512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.167 [2024-07-21 08:33:08.531086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.167 [2024-07-21 08:33:08.540556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.167 [2024-07-21 08:33:08.540970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-21 08:33:08.541019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.167 [2024-07-21 08:33:08.541042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.167 [2024-07-21 08:33:08.541279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.167 [2024-07-21 08:33:08.541522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.167 [2024-07-21 08:33:08.541546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.167 [2024-07-21 08:33:08.541563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.167 [2024-07-21 08:33:08.545131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.167 [2024-07-21 08:33:08.554533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.167 [2024-07-21 08:33:08.554968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.167 [2024-07-21 08:33:08.555028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.167 [2024-07-21 08:33:08.555046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.167 [2024-07-21 08:33:08.555284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.167 [2024-07-21 08:33:08.555527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.167 [2024-07-21 08:33:08.555551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.168 [2024-07-21 08:33:08.555567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.168 [2024-07-21 08:33:08.559147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.168 [2024-07-21 08:33:08.568431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.168 [2024-07-21 08:33:08.568846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.168 [2024-07-21 08:33:08.568878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.168 [2024-07-21 08:33:08.568895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.168 [2024-07-21 08:33:08.569133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.168 [2024-07-21 08:33:08.569375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.168 [2024-07-21 08:33:08.569399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.168 [2024-07-21 08:33:08.569416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.168 [2024-07-21 08:33:08.572992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.168 [2024-07-21 08:33:08.582458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.168 [2024-07-21 08:33:08.582845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.168 [2024-07-21 08:33:08.582880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.168 [2024-07-21 08:33:08.582898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.168 [2024-07-21 08:33:08.583135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.168 [2024-07-21 08:33:08.583378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.168 [2024-07-21 08:33:08.583403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.168 [2024-07-21 08:33:08.583418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.168 [2024-07-21 08:33:08.586998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.168 [2024-07-21 08:33:08.596470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.168 [2024-07-21 08:33:08.596945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.168 [2024-07-21 08:33:08.596977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.168 [2024-07-21 08:33:08.596995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.168 [2024-07-21 08:33:08.597234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.168 [2024-07-21 08:33:08.597477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.168 [2024-07-21 08:33:08.597501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.168 [2024-07-21 08:33:08.597517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.168 [2024-07-21 08:33:08.601092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.168 [2024-07-21 08:33:08.610359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.168 [2024-07-21 08:33:08.610779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.168 [2024-07-21 08:33:08.610811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.168 [2024-07-21 08:33:08.610832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.168 [2024-07-21 08:33:08.611069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.168 [2024-07-21 08:33:08.611311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.168 [2024-07-21 08:33:08.611335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.168 [2024-07-21 08:33:08.611351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.168 [2024-07-21 08:33:08.614923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.168 [2024-07-21 08:33:08.624393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.168 [2024-07-21 08:33:08.624773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.168 [2024-07-21 08:33:08.624805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.168 [2024-07-21 08:33:08.624828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.168 [2024-07-21 08:33:08.625068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.168 [2024-07-21 08:33:08.625311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.168 [2024-07-21 08:33:08.625335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.168 [2024-07-21 08:33:08.625358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.168 [2024-07-21 08:33:08.628933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.168 [2024-07-21 08:33:08.638419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.168 [2024-07-21 08:33:08.638838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.168 [2024-07-21 08:33:08.638871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.168 [2024-07-21 08:33:08.638890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.168 [2024-07-21 08:33:08.639128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.168 [2024-07-21 08:33:08.639381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.168 [2024-07-21 08:33:08.639405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.168 [2024-07-21 08:33:08.639426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.168 [2024-07-21 08:33:08.643002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.168 [2024-07-21 08:33:08.652300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.168 [2024-07-21 08:33:08.652704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.168 [2024-07-21 08:33:08.652736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.168 [2024-07-21 08:33:08.652754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.168 [2024-07-21 08:33:08.652992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.168 [2024-07-21 08:33:08.653244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.168 [2024-07-21 08:33:08.653268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.168 [2024-07-21 08:33:08.653283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.168 [2024-07-21 08:33:08.656867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.168 [2024-07-21 08:33:08.666145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.168 [2024-07-21 08:33:08.666523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.168 [2024-07-21 08:33:08.666565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.168 [2024-07-21 08:33:08.666583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.168 [2024-07-21 08:33:08.666836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.168 [2024-07-21 08:33:08.667080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.168 [2024-07-21 08:33:08.667109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.168 [2024-07-21 08:33:08.667127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.168 [2024-07-21 08:33:08.670698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.168 [2024-07-21 08:33:08.680165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.168 [2024-07-21 08:33:08.680565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.168 [2024-07-21 08:33:08.680603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.168 [2024-07-21 08:33:08.680630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.168 [2024-07-21 08:33:08.680869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.168 [2024-07-21 08:33:08.681112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.168 [2024-07-21 08:33:08.681135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.168 [2024-07-21 08:33:08.681151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.168 [2024-07-21 08:33:08.684725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.168 [2024-07-21 08:33:08.694203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.168 [2024-07-21 08:33:08.694620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.168 [2024-07-21 08:33:08.694652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.168 [2024-07-21 08:33:08.694670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.168 [2024-07-21 08:33:08.694908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.168 [2024-07-21 08:33:08.695155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.168 [2024-07-21 08:33:08.695179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.168 [2024-07-21 08:33:08.695195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.168 [2024-07-21 08:33:08.698771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.168 [2024-07-21 08:33:08.708045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.168 [2024-07-21 08:33:08.708471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.168 [2024-07-21 08:33:08.708502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.169 [2024-07-21 08:33:08.708521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.169 [2024-07-21 08:33:08.708767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.169 [2024-07-21 08:33:08.709010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.169 [2024-07-21 08:33:08.709035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.169 [2024-07-21 08:33:08.709051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.169 [2024-07-21 08:33:08.712621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.169 [2024-07-21 08:33:08.721941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.169 [2024-07-21 08:33:08.722332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.169 [2024-07-21 08:33:08.722364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.169 [2024-07-21 08:33:08.722382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.169 [2024-07-21 08:33:08.722627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.169 [2024-07-21 08:33:08.722871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.169 [2024-07-21 08:33:08.722895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.169 [2024-07-21 08:33:08.722915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.169 [2024-07-21 08:33:08.726483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.169 [2024-07-21 08:33:08.735978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.169 [2024-07-21 08:33:08.736396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.169 [2024-07-21 08:33:08.736427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.169 [2024-07-21 08:33:08.736444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.169 [2024-07-21 08:33:08.736691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.169 [2024-07-21 08:33:08.736934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.169 [2024-07-21 08:33:08.736958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.169 [2024-07-21 08:33:08.736974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.169 [2024-07-21 08:33:08.740546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.169 [2024-07-21 08:33:08.749836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.169 [2024-07-21 08:33:08.750252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.169 [2024-07-21 08:33:08.750302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.169 [2024-07-21 08:33:08.750321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.169 [2024-07-21 08:33:08.750560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.169 [2024-07-21 08:33:08.750812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.169 [2024-07-21 08:33:08.750837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.169 [2024-07-21 08:33:08.750853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.169 [2024-07-21 08:33:08.754414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.169 [2024-07-21 08:33:08.763693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.169 [2024-07-21 08:33:08.764102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.169 [2024-07-21 08:33:08.764133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.169 [2024-07-21 08:33:08.764151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.169 [2024-07-21 08:33:08.764396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.169 [2024-07-21 08:33:08.764655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.169 [2024-07-21 08:33:08.764680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.169 [2024-07-21 08:33:08.764696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.169 [2024-07-21 08:33:08.768266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.169 [2024-07-21 08:33:08.777537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.169 [2024-07-21 08:33:08.777907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.169 [2024-07-21 08:33:08.777939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.169 [2024-07-21 08:33:08.777965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.169 [2024-07-21 08:33:08.778202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.169 [2024-07-21 08:33:08.778445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.169 [2024-07-21 08:33:08.778469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.169 [2024-07-21 08:33:08.778485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.169 [2024-07-21 08:33:08.782064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.430 [2024-07-21 08:33:08.791541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.430 [2024-07-21 08:33:08.791932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.430 [2024-07-21 08:33:08.791975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.430 [2024-07-21 08:33:08.791994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.430 [2024-07-21 08:33:08.792232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.430 [2024-07-21 08:33:08.792475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.430 [2024-07-21 08:33:08.792499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.430 [2024-07-21 08:33:08.792515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.430 [2024-07-21 08:33:08.796091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.430 [2024-07-21 08:33:08.805577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.430 [2024-07-21 08:33:08.806009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.430 [2024-07-21 08:33:08.806061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.430 [2024-07-21 08:33:08.806080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.430 [2024-07-21 08:33:08.806318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.430 [2024-07-21 08:33:08.806570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.430 [2024-07-21 08:33:08.806594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.430 [2024-07-21 08:33:08.806666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.430 [2024-07-21 08:33:08.810236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.430 [2024-07-21 08:33:08.819529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.430 [2024-07-21 08:33:08.819896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.430 [2024-07-21 08:33:08.819928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.430 [2024-07-21 08:33:08.819958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.430 [2024-07-21 08:33:08.820196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.430 [2024-07-21 08:33:08.820439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.430 [2024-07-21 08:33:08.820463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.430 [2024-07-21 08:33:08.820478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.430 [2024-07-21 08:33:08.824056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.430 [2024-07-21 08:33:08.833544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.430 [2024-07-21 08:33:08.833970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.430 [2024-07-21 08:33:08.834017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.430 [2024-07-21 08:33:08.834035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.430 [2024-07-21 08:33:08.834273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.430 [2024-07-21 08:33:08.834516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.430 [2024-07-21 08:33:08.834540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.430 [2024-07-21 08:33:08.834555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.430 [2024-07-21 08:33:08.838141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.430 [2024-07-21 08:33:08.847409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.430 [2024-07-21 08:33:08.847786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.430 [2024-07-21 08:33:08.847818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.430 [2024-07-21 08:33:08.847836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.430 [2024-07-21 08:33:08.848073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.431 [2024-07-21 08:33:08.848316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.431 [2024-07-21 08:33:08.848340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.431 [2024-07-21 08:33:08.848356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.431 [2024-07-21 08:33:08.851938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.431 [2024-07-21 08:33:08.861398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.431 [2024-07-21 08:33:08.861814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.431 [2024-07-21 08:33:08.861853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.431 [2024-07-21 08:33:08.861872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.431 [2024-07-21 08:33:08.862109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.431 [2024-07-21 08:33:08.862352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.431 [2024-07-21 08:33:08.862376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.431 [2024-07-21 08:33:08.862392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.431 [2024-07-21 08:33:08.865965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.431 [2024-07-21 08:33:08.875426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.431 [2024-07-21 08:33:08.875813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.431 [2024-07-21 08:33:08.875855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.431 [2024-07-21 08:33:08.875873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.431 [2024-07-21 08:33:08.876110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.431 [2024-07-21 08:33:08.876353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.431 [2024-07-21 08:33:08.876377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.431 [2024-07-21 08:33:08.876392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.431 [2024-07-21 08:33:08.879967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.431 [2024-07-21 08:33:08.889443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.431 [2024-07-21 08:33:08.889816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.431 [2024-07-21 08:33:08.889847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.431 [2024-07-21 08:33:08.889865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.431 [2024-07-21 08:33:08.890102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.431 [2024-07-21 08:33:08.890345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.431 [2024-07-21 08:33:08.890369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.431 [2024-07-21 08:33:08.890385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.431 [2024-07-21 08:33:08.893958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.431 [2024-07-21 08:33:08.903423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.431 [2024-07-21 08:33:08.903826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.431 [2024-07-21 08:33:08.903857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.431 [2024-07-21 08:33:08.903875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.431 [2024-07-21 08:33:08.904113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.431 [2024-07-21 08:33:08.904361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.431 [2024-07-21 08:33:08.904386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.431 [2024-07-21 08:33:08.904402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.431 [2024-07-21 08:33:08.907975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.431 [2024-07-21 08:33:08.917450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.431 [2024-07-21 08:33:08.917833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.431 [2024-07-21 08:33:08.917864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.431 [2024-07-21 08:33:08.917882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.431 [2024-07-21 08:33:08.918119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.431 [2024-07-21 08:33:08.918361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.431 [2024-07-21 08:33:08.918381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.431 [2024-07-21 08:33:08.918395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.431 [2024-07-21 08:33:08.921572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.431 [2024-07-21 08:33:08.931294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.431 [2024-07-21 08:33:08.931707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.431 [2024-07-21 08:33:08.931736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.431 [2024-07-21 08:33:08.931760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.431 [2024-07-21 08:33:08.931999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.431 [2024-07-21 08:33:08.932242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.431 [2024-07-21 08:33:08.932266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.431 [2024-07-21 08:33:08.932282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.431 [2024-07-21 08:33:08.935854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.431 [2024-07-21 08:33:08.944956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.431 [2024-07-21 08:33:08.945309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.431 [2024-07-21 08:33:08.945336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.431 [2024-07-21 08:33:08.945352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.431 [2024-07-21 08:33:08.945573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.431 [2024-07-21 08:33:08.945800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.431 [2024-07-21 08:33:08.945823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.431 [2024-07-21 08:33:08.945837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.431 [2024-07-21 08:33:08.949004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.431 [2024-07-21 08:33:08.958237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.431 [2024-07-21 08:33:08.958564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.431 [2024-07-21 08:33:08.958604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.431 [2024-07-21 08:33:08.958631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.431 [2024-07-21 08:33:08.958876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.431 [2024-07-21 08:33:08.959108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.431 [2024-07-21 08:33:08.959128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.431 [2024-07-21 08:33:08.959142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.431 [2024-07-21 08:33:08.962439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.431 [2024-07-21 08:33:08.972172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.431 [2024-07-21 08:33:08.972540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.431 [2024-07-21 08:33:08.972568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.431 [2024-07-21 08:33:08.972583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.431 [2024-07-21 08:33:08.972834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.432 [2024-07-21 08:33:08.973085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.432 [2024-07-21 08:33:08.973111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.432 [2024-07-21 08:33:08.973128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.432 [2024-07-21 08:33:08.976624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.432 [2024-07-21 08:33:08.985780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.432 [2024-07-21 08:33:08.986205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.432 [2024-07-21 08:33:08.986233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.432 [2024-07-21 08:33:08.986249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.432 [2024-07-21 08:33:08.986497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.432 [2024-07-21 08:33:08.986731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.432 [2024-07-21 08:33:08.986754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.432 [2024-07-21 08:33:08.986769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.432 [2024-07-21 08:33:08.989862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.432 [2024-07-21 08:33:08.999593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.432 [2024-07-21 08:33:08.999967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.432 [2024-07-21 08:33:09.000010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.432 [2024-07-21 08:33:09.000031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.432 [2024-07-21 08:33:09.000255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.432 [2024-07-21 08:33:09.000514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.432 [2024-07-21 08:33:09.000539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.432 [2024-07-21 08:33:09.000555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.432 [2024-07-21 08:33:09.004136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.432 [2024-07-21 08:33:09.013586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.432 [2024-07-21 08:33:09.013976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.432 [2024-07-21 08:33:09.014004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.432 [2024-07-21 08:33:09.014021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.432 [2024-07-21 08:33:09.014285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.432 [2024-07-21 08:33:09.014521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.432 [2024-07-21 08:33:09.014545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.432 [2024-07-21 08:33:09.014561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.432 [2024-07-21 08:33:09.017950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.432 [2024-07-21 08:33:09.026985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.432 [2024-07-21 08:33:09.027320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.432 [2024-07-21 08:33:09.027349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.432 [2024-07-21 08:33:09.027365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.432 [2024-07-21 08:33:09.027587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.432 [2024-07-21 08:33:09.027807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.432 [2024-07-21 08:33:09.027831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.432 [2024-07-21 08:33:09.027845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.432 [2024-07-21 08:33:09.030890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.432 [2024-07-21 08:33:09.040981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.432 [2024-07-21 08:33:09.041452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.432 [2024-07-21 08:33:09.041496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.432 [2024-07-21 08:33:09.041513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.432 [2024-07-21 08:33:09.041776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.432 [2024-07-21 08:33:09.042020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.432 [2024-07-21 08:33:09.042050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.432 [2024-07-21 08:33:09.042068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.432 [2024-07-21 08:33:09.045641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.432 [2024-07-21 08:33:09.054811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.432 [2024-07-21 08:33:09.055181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.432 [2024-07-21 08:33:09.055208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.432 [2024-07-21 08:33:09.055226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.432 [2024-07-21 08:33:09.055465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.432 [2024-07-21 08:33:09.055702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.432 [2024-07-21 08:33:09.055725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.432 [2024-07-21 08:33:09.055739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.432 [2024-07-21 08:33:09.059065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.692 [2024-07-21 08:33:09.068154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.692 [2024-07-21 08:33:09.068530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.692 [2024-07-21 08:33:09.068560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.692 [2024-07-21 08:33:09.068577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.692 [2024-07-21 08:33:09.069035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.692 [2024-07-21 08:33:09.069244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.692 [2024-07-21 08:33:09.069265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.692 [2024-07-21 08:33:09.069279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.692 [2024-07-21 08:33:09.072267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.692 [2024-07-21 08:33:09.081992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.692 [2024-07-21 08:33:09.082405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.692 [2024-07-21 08:33:09.082435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.692 [2024-07-21 08:33:09.082451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.692 [2024-07-21 08:33:09.082713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.692 [2024-07-21 08:33:09.082944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.692 [2024-07-21 08:33:09.082981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.692 [2024-07-21 08:33:09.082998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.692 [2024-07-21 08:33:09.086582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.692 [2024-07-21 08:33:09.095802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.692 [2024-07-21 08:33:09.096211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.692 [2024-07-21 08:33:09.096239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.692 [2024-07-21 08:33:09.096255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.692 [2024-07-21 08:33:09.096498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.692 [2024-07-21 08:33:09.096728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.692 [2024-07-21 08:33:09.096752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.692 [2024-07-21 08:33:09.096766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.692 [2024-07-21 08:33:09.099983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.692 [2024-07-21 08:33:09.109169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.692 [2024-07-21 08:33:09.109542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.692 [2024-07-21 08:33:09.109571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.692 [2024-07-21 08:33:09.109588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.692 [2024-07-21 08:33:09.109812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.692 [2024-07-21 08:33:09.110055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.692 [2024-07-21 08:33:09.110077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.692 [2024-07-21 08:33:09.110090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.692 [2024-07-21 08:33:09.113122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.692 [2024-07-21 08:33:09.123107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.692 [2024-07-21 08:33:09.123489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.692 [2024-07-21 08:33:09.123532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.692 [2024-07-21 08:33:09.123548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.692 [2024-07-21 08:33:09.123801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.692 [2024-07-21 08:33:09.124048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.692 [2024-07-21 08:33:09.124073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.692 [2024-07-21 08:33:09.124089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.692 [2024-07-21 08:33:09.127639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.692 [2024-07-21 08:33:09.136704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.692 [2024-07-21 08:33:09.137127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.692 [2024-07-21 08:33:09.137170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.692 [2024-07-21 08:33:09.137186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.692 [2024-07-21 08:33:09.137415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.692 [2024-07-21 08:33:09.137642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.692 [2024-07-21 08:33:09.137665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.692 [2024-07-21 08:33:09.137681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.692 [2024-07-21 08:33:09.140814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.692 [2024-07-21 08:33:09.150006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.692 [2024-07-21 08:33:09.150463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.692 [2024-07-21 08:33:09.150492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.692 [2024-07-21 08:33:09.150509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.692 [2024-07-21 08:33:09.150764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.692 [2024-07-21 08:33:09.151001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.692 [2024-07-21 08:33:09.151026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.692 [2024-07-21 08:33:09.151043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.692 [2024-07-21 08:33:09.154603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.692 [2024-07-21 08:33:09.163881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.692 [2024-07-21 08:33:09.164374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.693 [2024-07-21 08:33:09.164403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.693 [2024-07-21 08:33:09.164421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.693 [2024-07-21 08:33:09.164690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.693 [2024-07-21 08:33:09.164920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.693 [2024-07-21 08:33:09.164946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.693 [2024-07-21 08:33:09.164962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.693 [2024-07-21 08:33:09.168459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.693 [2024-07-21 08:33:09.177371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.693 [2024-07-21 08:33:09.177720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.693 [2024-07-21 08:33:09.177750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.693 [2024-07-21 08:33:09.177767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.693 [2024-07-21 08:33:09.177999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.693 [2024-07-21 08:33:09.178210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.693 [2024-07-21 08:33:09.178233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.693 [2024-07-21 08:33:09.178252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.693 [2024-07-21 08:33:09.181324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.693 [2024-07-21 08:33:09.191097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.693 [2024-07-21 08:33:09.191481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.693 [2024-07-21 08:33:09.191513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.693 [2024-07-21 08:33:09.191531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.693 [2024-07-21 08:33:09.191810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.693 [2024-07-21 08:33:09.192059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.693 [2024-07-21 08:33:09.192085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.693 [2024-07-21 08:33:09.192101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.693 [2024-07-21 08:33:09.195685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.693 [2024-07-21 08:33:09.204913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.693 [2024-07-21 08:33:09.205299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.693 [2024-07-21 08:33:09.205328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.693 [2024-07-21 08:33:09.205344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.693 [2024-07-21 08:33:09.205595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.693 [2024-07-21 08:33:09.205840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.693 [2024-07-21 08:33:09.205864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.693 [2024-07-21 08:33:09.205881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.693 [2024-07-21 08:33:09.209277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.693 [2024-07-21 08:33:09.218349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.693 [2024-07-21 08:33:09.218678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.693 [2024-07-21 08:33:09.218706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.693 [2024-07-21 08:33:09.218722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.693 [2024-07-21 08:33:09.218945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.693 [2024-07-21 08:33:09.219150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.693 [2024-07-21 08:33:09.219172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.693 [2024-07-21 08:33:09.219186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.693 [2024-07-21 08:33:09.222195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.693 [2024-07-21 08:33:09.232391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.693 [2024-07-21 08:33:09.232781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.693 [2024-07-21 08:33:09.232811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.693 [2024-07-21 08:33:09.232827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.693 [2024-07-21 08:33:09.233071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.693 [2024-07-21 08:33:09.233313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.693 [2024-07-21 08:33:09.233338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.693 [2024-07-21 08:33:09.233354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.693 [2024-07-21 08:33:09.236931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.693 [2024-07-21 08:33:09.246399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.693 [2024-07-21 08:33:09.246807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.693 [2024-07-21 08:33:09.246840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.693 [2024-07-21 08:33:09.246858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.693 [2024-07-21 08:33:09.247096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.693 [2024-07-21 08:33:09.247339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.693 [2024-07-21 08:33:09.247364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.693 [2024-07-21 08:33:09.247379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.693 [2024-07-21 08:33:09.250962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.693 [2024-07-21 08:33:09.260427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.693 [2024-07-21 08:33:09.260838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.693 [2024-07-21 08:33:09.260870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.693 [2024-07-21 08:33:09.260888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.693 [2024-07-21 08:33:09.261127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.693 [2024-07-21 08:33:09.261369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.693 [2024-07-21 08:33:09.261395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.693 [2024-07-21 08:33:09.261411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.693 [2024-07-21 08:33:09.264984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.693 [2024-07-21 08:33:09.274451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.693 [2024-07-21 08:33:09.274835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.693 [2024-07-21 08:33:09.274866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.693 [2024-07-21 08:33:09.274884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.693 [2024-07-21 08:33:09.275127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.693 [2024-07-21 08:33:09.275369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.693 [2024-07-21 08:33:09.275394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.693 [2024-07-21 08:33:09.275410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.693 [2024-07-21 08:33:09.278985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.693 [2024-07-21 08:33:09.288453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.693 [2024-07-21 08:33:09.288857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.693 [2024-07-21 08:33:09.288889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.693 [2024-07-21 08:33:09.288907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.693 [2024-07-21 08:33:09.289145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.693 [2024-07-21 08:33:09.289388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.693 [2024-07-21 08:33:09.289413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.693 [2024-07-21 08:33:09.289429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.693 [2024-07-21 08:33:09.293006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.693 [2024-07-21 08:33:09.302480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:59.693 [2024-07-21 08:33:09.302889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.693 [2024-07-21 08:33:09.302920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:36:59.693 [2024-07-21 08:33:09.302938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:36:59.693 [2024-07-21 08:33:09.303176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:36:59.693 [2024-07-21 08:33:09.303418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:59.693 [2024-07-21 08:33:09.303443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:59.693 [2024-07-21 08:33:09.303458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:59.693 [2024-07-21 08:33:09.307034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:59.694 [2024-07-21 08:33:09.316506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.694 [2024-07-21 08:33:09.316950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.694 [2024-07-21 08:33:09.316999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.694 [2024-07-21 08:33:09.317018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.694 [2024-07-21 08:33:09.317257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.694 [2024-07-21 08:33:09.317500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.694 [2024-07-21 08:33:09.317525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.694 [2024-07-21 08:33:09.317546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.955 [2024-07-21 08:33:09.321169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.955 [2024-07-21 08:33:09.330437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.955 [2024-07-21 08:33:09.330826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.955 [2024-07-21 08:33:09.330858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.955 [2024-07-21 08:33:09.330876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.955 [2024-07-21 08:33:09.331114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.955 [2024-07-21 08:33:09.331358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.955 [2024-07-21 08:33:09.331382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.955 [2024-07-21 08:33:09.331398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.955 [2024-07-21 08:33:09.334978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.955 [2024-07-21 08:33:09.344466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.955 [2024-07-21 08:33:09.344856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.955 [2024-07-21 08:33:09.344888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.955 [2024-07-21 08:33:09.344906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.955 [2024-07-21 08:33:09.345144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.955 [2024-07-21 08:33:09.345386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.955 [2024-07-21 08:33:09.345411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.955 [2024-07-21 08:33:09.345427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.955 [2024-07-21 08:33:09.349016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.955 [2024-07-21 08:33:09.358307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.955 [2024-07-21 08:33:09.358740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.955 [2024-07-21 08:33:09.358772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.955 [2024-07-21 08:33:09.358790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.955 [2024-07-21 08:33:09.359028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.955 [2024-07-21 08:33:09.359271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.955 [2024-07-21 08:33:09.359295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.955 [2024-07-21 08:33:09.359311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.955 [2024-07-21 08:33:09.362888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.955 [2024-07-21 08:33:09.372169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.955 [2024-07-21 08:33:09.372577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.955 [2024-07-21 08:33:09.372622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.955 [2024-07-21 08:33:09.372656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.955 [2024-07-21 08:33:09.372894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.955 [2024-07-21 08:33:09.373138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.955 [2024-07-21 08:33:09.373162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.955 [2024-07-21 08:33:09.373178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.955 [2024-07-21 08:33:09.376756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.955 [2024-07-21 08:33:09.386057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.955 [2024-07-21 08:33:09.386435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.955 [2024-07-21 08:33:09.386467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.955 [2024-07-21 08:33:09.386485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.955 [2024-07-21 08:33:09.386734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.955 [2024-07-21 08:33:09.386979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.955 [2024-07-21 08:33:09.387003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.955 [2024-07-21 08:33:09.387019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.955 [2024-07-21 08:33:09.390591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.955 [2024-07-21 08:33:09.400079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.955 [2024-07-21 08:33:09.400535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.955 [2024-07-21 08:33:09.400566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.955 [2024-07-21 08:33:09.400585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.955 [2024-07-21 08:33:09.400832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.955 [2024-07-21 08:33:09.401076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.955 [2024-07-21 08:33:09.401100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.955 [2024-07-21 08:33:09.401116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.955 [2024-07-21 08:33:09.404697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.955 [2024-07-21 08:33:09.413974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.955 [2024-07-21 08:33:09.414419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.955 [2024-07-21 08:33:09.414451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.955 [2024-07-21 08:33:09.414470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.955 [2024-07-21 08:33:09.414719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.955 [2024-07-21 08:33:09.414968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.955 [2024-07-21 08:33:09.414993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.955 [2024-07-21 08:33:09.415010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.955 [2024-07-21 08:33:09.418582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.955 [2024-07-21 08:33:09.427904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.955 [2024-07-21 08:33:09.428305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.955 [2024-07-21 08:33:09.428337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.955 [2024-07-21 08:33:09.428355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.955 [2024-07-21 08:33:09.428594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.955 [2024-07-21 08:33:09.428847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.955 [2024-07-21 08:33:09.428873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.955 [2024-07-21 08:33:09.428888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.955 [2024-07-21 08:33:09.432466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.955 [2024-07-21 08:33:09.441751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.955 [2024-07-21 08:33:09.442154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.955 [2024-07-21 08:33:09.442186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.955 [2024-07-21 08:33:09.442204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.955 [2024-07-21 08:33:09.442442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.955 [2024-07-21 08:33:09.442696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.955 [2024-07-21 08:33:09.442732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.955 [2024-07-21 08:33:09.442747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.955 [2024-07-21 08:33:09.446312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.955 [2024-07-21 08:33:09.455582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.955 [2024-07-21 08:33:09.456066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.955 [2024-07-21 08:33:09.456115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.955 [2024-07-21 08:33:09.456133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.955 [2024-07-21 08:33:09.456370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.955 [2024-07-21 08:33:09.456622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.955 [2024-07-21 08:33:09.456648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.955 [2024-07-21 08:33:09.456666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.955 [2024-07-21 08:33:09.460239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.955 [2024-07-21 08:33:09.469508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.955 [2024-07-21 08:33:09.469907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.955 [2024-07-21 08:33:09.469941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.956 [2024-07-21 08:33:09.469960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.956 [2024-07-21 08:33:09.470199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.956 [2024-07-21 08:33:09.470443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.956 [2024-07-21 08:33:09.470469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.956 [2024-07-21 08:33:09.470486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.956 [2024-07-21 08:33:09.474062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.956 [2024-07-21 08:33:09.483537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.956 [2024-07-21 08:33:09.483950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.956 [2024-07-21 08:33:09.483983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.956 [2024-07-21 08:33:09.484001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.956 [2024-07-21 08:33:09.484239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.956 [2024-07-21 08:33:09.484481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.956 [2024-07-21 08:33:09.484506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.956 [2024-07-21 08:33:09.484522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.956 [2024-07-21 08:33:09.488109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.956 [2024-07-21 08:33:09.497370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.956 [2024-07-21 08:33:09.497855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.956 [2024-07-21 08:33:09.497887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.956 [2024-07-21 08:33:09.497906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.956 [2024-07-21 08:33:09.498144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.956 [2024-07-21 08:33:09.498386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.956 [2024-07-21 08:33:09.498412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.956 [2024-07-21 08:33:09.498428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.956 [2024-07-21 08:33:09.502008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.956 [2024-07-21 08:33:09.511269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.956 [2024-07-21 08:33:09.511646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.956 [2024-07-21 08:33:09.511679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.956 [2024-07-21 08:33:09.511704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.956 [2024-07-21 08:33:09.511943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.956 [2024-07-21 08:33:09.512186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.956 [2024-07-21 08:33:09.512212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.956 [2024-07-21 08:33:09.512229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.956 [2024-07-21 08:33:09.515805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.956 [2024-07-21 08:33:09.525277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.956 [2024-07-21 08:33:09.525690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.956 [2024-07-21 08:33:09.525724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.956 [2024-07-21 08:33:09.525743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.956 [2024-07-21 08:33:09.525981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.956 [2024-07-21 08:33:09.526225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.956 [2024-07-21 08:33:09.526250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.956 [2024-07-21 08:33:09.526266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.956 [2024-07-21 08:33:09.529844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.956 [2024-07-21 08:33:09.539111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.956 [2024-07-21 08:33:09.539514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.956 [2024-07-21 08:33:09.539547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.956 [2024-07-21 08:33:09.539565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.956 [2024-07-21 08:33:09.539816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.956 [2024-07-21 08:33:09.540060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.956 [2024-07-21 08:33:09.540085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.956 [2024-07-21 08:33:09.540102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.956 [2024-07-21 08:33:09.543679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.956 [2024-07-21 08:33:09.552943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.956 [2024-07-21 08:33:09.553345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.956 [2024-07-21 08:33:09.553376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.956 [2024-07-21 08:33:09.553394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.956 [2024-07-21 08:33:09.553646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.956 [2024-07-21 08:33:09.553888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.956 [2024-07-21 08:33:09.553923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.956 [2024-07-21 08:33:09.553941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.956 [2024-07-21 08:33:09.557509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.956 [2024-07-21 08:33:09.566800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.956 [2024-07-21 08:33:09.567213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.956 [2024-07-21 08:33:09.567245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.956 [2024-07-21 08:33:09.567264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.956 [2024-07-21 08:33:09.567503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.956 [2024-07-21 08:33:09.567759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.956 [2024-07-21 08:33:09.567786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.956 [2024-07-21 08:33:09.567803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:59.956 [2024-07-21 08:33:09.571376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:59.956 [2024-07-21 08:33:09.580728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:59.956 [2024-07-21 08:33:09.581130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.956 [2024-07-21 08:33:09.581164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:36:59.956 [2024-07-21 08:33:09.581182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:36:59.956 [2024-07-21 08:33:09.581421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:36:59.956 [2024-07-21 08:33:09.581676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:59.956 [2024-07-21 08:33:09.581702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:59.956 [2024-07-21 08:33:09.581719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.217 [2024-07-21 08:33:09.585293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.217 [2024-07-21 08:33:09.594577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.217 [2024-07-21 08:33:09.595040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.217 [2024-07-21 08:33:09.595094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.217 [2024-07-21 08:33:09.595112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.217 [2024-07-21 08:33:09.595350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.217 [2024-07-21 08:33:09.595592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.217 [2024-07-21 08:33:09.595626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.217 [2024-07-21 08:33:09.595645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.218 [2024-07-21 08:33:09.599213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.218 [2024-07-21 08:33:09.608477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.218 [2024-07-21 08:33:09.608864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.218 [2024-07-21 08:33:09.608896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.218 [2024-07-21 08:33:09.608916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.218 [2024-07-21 08:33:09.609155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.218 [2024-07-21 08:33:09.609399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.218 [2024-07-21 08:33:09.609425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.218 [2024-07-21 08:33:09.609441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.218 [2024-07-21 08:33:09.613019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.218 [2024-07-21 08:33:09.622484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.218 [2024-07-21 08:33:09.622905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.218 [2024-07-21 08:33:09.622939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.218 [2024-07-21 08:33:09.622957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.218 [2024-07-21 08:33:09.623195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.218 [2024-07-21 08:33:09.623439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.218 [2024-07-21 08:33:09.623465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.218 [2024-07-21 08:33:09.623481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.218 [2024-07-21 08:33:09.627063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.218 [2024-07-21 08:33:09.636322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.218 [2024-07-21 08:33:09.636700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.218 [2024-07-21 08:33:09.636733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.218 [2024-07-21 08:33:09.636752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.218 [2024-07-21 08:33:09.636990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.218 [2024-07-21 08:33:09.637233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.218 [2024-07-21 08:33:09.637258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.218 [2024-07-21 08:33:09.637275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.218 [2024-07-21 08:33:09.640852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.218 [2024-07-21 08:33:09.650325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.218 [2024-07-21 08:33:09.650728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.218 [2024-07-21 08:33:09.650761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.218 [2024-07-21 08:33:09.650779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.218 [2024-07-21 08:33:09.651023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.218 [2024-07-21 08:33:09.651265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.218 [2024-07-21 08:33:09.651289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.218 [2024-07-21 08:33:09.651305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.218 [2024-07-21 08:33:09.654885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.218 [2024-07-21 08:33:09.664357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.218 [2024-07-21 08:33:09.664767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.218 [2024-07-21 08:33:09.664799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.218 [2024-07-21 08:33:09.664818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.218 [2024-07-21 08:33:09.665055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.218 [2024-07-21 08:33:09.665298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.218 [2024-07-21 08:33:09.665324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.218 [2024-07-21 08:33:09.665340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.218 [2024-07-21 08:33:09.668922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.218 [2024-07-21 08:33:09.678193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.218 [2024-07-21 08:33:09.678695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.218 [2024-07-21 08:33:09.678729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.218 [2024-07-21 08:33:09.678748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.218 [2024-07-21 08:33:09.678987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.218 [2024-07-21 08:33:09.679229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.218 [2024-07-21 08:33:09.679254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.218 [2024-07-21 08:33:09.679269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.218 [2024-07-21 08:33:09.682850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.218 [2024-07-21 08:33:09.692119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.218 [2024-07-21 08:33:09.692535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.218 [2024-07-21 08:33:09.692566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.218 [2024-07-21 08:33:09.692584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.218 [2024-07-21 08:33:09.692834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.218 [2024-07-21 08:33:09.693077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.218 [2024-07-21 08:33:09.693103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.218 [2024-07-21 08:33:09.693125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.218 [2024-07-21 08:33:09.696701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.218 [2024-07-21 08:33:09.705964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.218 [2024-07-21 08:33:09.706341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.218 [2024-07-21 08:33:09.706374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.218 [2024-07-21 08:33:09.706393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.218 [2024-07-21 08:33:09.706644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.218 [2024-07-21 08:33:09.706887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.218 [2024-07-21 08:33:09.706912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.218 [2024-07-21 08:33:09.706928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.218 [2024-07-21 08:33:09.710495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.218 [2024-07-21 08:33:09.719972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.218 [2024-07-21 08:33:09.720364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.218 [2024-07-21 08:33:09.720396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.218 [2024-07-21 08:33:09.720414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.218 [2024-07-21 08:33:09.720666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.218 [2024-07-21 08:33:09.720909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.218 [2024-07-21 08:33:09.720935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.218 [2024-07-21 08:33:09.720951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.218 [2024-07-21 08:33:09.724518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.218 [2024-07-21 08:33:09.733994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.218 [2024-07-21 08:33:09.734414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.218 [2024-07-21 08:33:09.734446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.218 [2024-07-21 08:33:09.734464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.218 [2024-07-21 08:33:09.734714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.218 [2024-07-21 08:33:09.734958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.218 [2024-07-21 08:33:09.734983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.218 [2024-07-21 08:33:09.735000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.218 [2024-07-21 08:33:09.738566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.218 [2024-07-21 08:33:09.747829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.218 [2024-07-21 08:33:09.748235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.218 [2024-07-21 08:33:09.748268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.218 [2024-07-21 08:33:09.748287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.218 [2024-07-21 08:33:09.748525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.218 [2024-07-21 08:33:09.748782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.219 [2024-07-21 08:33:09.748809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.219 [2024-07-21 08:33:09.748825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.219 [2024-07-21 08:33:09.752394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.219 [2024-07-21 08:33:09.761663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.219 [2024-07-21 08:33:09.762064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.219 [2024-07-21 08:33:09.762095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.219 [2024-07-21 08:33:09.762113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.219 [2024-07-21 08:33:09.762350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.219 [2024-07-21 08:33:09.762593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.219 [2024-07-21 08:33:09.762628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.219 [2024-07-21 08:33:09.762647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.219 [2024-07-21 08:33:09.766215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.219 [2024-07-21 08:33:09.775691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.219 [2024-07-21 08:33:09.776102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.219 [2024-07-21 08:33:09.776134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.219 [2024-07-21 08:33:09.776152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.219 [2024-07-21 08:33:09.776390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.219 [2024-07-21 08:33:09.776644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.219 [2024-07-21 08:33:09.776670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.219 [2024-07-21 08:33:09.776687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.219 [2024-07-21 08:33:09.780254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.219 [2024-07-21 08:33:09.789730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.219 [2024-07-21 08:33:09.790139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.219 [2024-07-21 08:33:09.790170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.219 [2024-07-21 08:33:09.790188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.219 [2024-07-21 08:33:09.790425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.219 [2024-07-21 08:33:09.790686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.219 [2024-07-21 08:33:09.790712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.219 [2024-07-21 08:33:09.790727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.219 [2024-07-21 08:33:09.794294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.219 [2024-07-21 08:33:09.803555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.219 [2024-07-21 08:33:09.803939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.219 [2024-07-21 08:33:09.803972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.219 [2024-07-21 08:33:09.803991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.219 [2024-07-21 08:33:09.804229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.219 [2024-07-21 08:33:09.804472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.219 [2024-07-21 08:33:09.804496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.219 [2024-07-21 08:33:09.804512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.219 [2024-07-21 08:33:09.808093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.219 [2024-07-21 08:33:09.817584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.219 [2024-07-21 08:33:09.817997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.219 [2024-07-21 08:33:09.818029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.219 [2024-07-21 08:33:09.818047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.219 [2024-07-21 08:33:09.818286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.219 [2024-07-21 08:33:09.818529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.219 [2024-07-21 08:33:09.818554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.219 [2024-07-21 08:33:09.818571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.219 [2024-07-21 08:33:09.822156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.219 [2024-07-21 08:33:09.831428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.219 [2024-07-21 08:33:09.831793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.219 [2024-07-21 08:33:09.831825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.219 [2024-07-21 08:33:09.831843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.219 [2024-07-21 08:33:09.832081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.219 [2024-07-21 08:33:09.832324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.219 [2024-07-21 08:33:09.832349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.219 [2024-07-21 08:33:09.832365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.219 [2024-07-21 08:33:09.835952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.219 [2024-07-21 08:33:09.845466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.219 [2024-07-21 08:33:09.845854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.219 [2024-07-21 08:33:09.845885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.219 [2024-07-21 08:33:09.845903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.219 [2024-07-21 08:33:09.846141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.219 [2024-07-21 08:33:09.846383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.219 [2024-07-21 08:33:09.846408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.219 [2024-07-21 08:33:09.846424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.481 [2024-07-21 08:33:09.850007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.481 [2024-07-21 08:33:09.859493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.481 [2024-07-21 08:33:09.859857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-21 08:33:09.859889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.481 [2024-07-21 08:33:09.859907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.481 [2024-07-21 08:33:09.860145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.481 [2024-07-21 08:33:09.860388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.481 [2024-07-21 08:33:09.860413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.481 [2024-07-21 08:33:09.860429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.481 [2024-07-21 08:33:09.864008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.481 [2024-07-21 08:33:09.873487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.481 [2024-07-21 08:33:09.873853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-21 08:33:09.873885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.481 [2024-07-21 08:33:09.873903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.481 [2024-07-21 08:33:09.874140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.481 [2024-07-21 08:33:09.874382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.481 [2024-07-21 08:33:09.874408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.481 [2024-07-21 08:33:09.874424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.481 [2024-07-21 08:33:09.878005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.481 [2024-07-21 08:33:09.887487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.481 [2024-07-21 08:33:09.887855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.481 [2024-07-21 08:33:09.887895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.482 [2024-07-21 08:33:09.887914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.482 [2024-07-21 08:33:09.888154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.482 [2024-07-21 08:33:09.888398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.482 [2024-07-21 08:33:09.888424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.482 [2024-07-21 08:33:09.888440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.482 [2024-07-21 08:33:09.892027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.482 [2024-07-21 08:33:09.901511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.482 [2024-07-21 08:33:09.901919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-21 08:33:09.901951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.482 [2024-07-21 08:33:09.901969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.482 [2024-07-21 08:33:09.902206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.482 [2024-07-21 08:33:09.902448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.482 [2024-07-21 08:33:09.902473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.482 [2024-07-21 08:33:09.902490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.482 [2024-07-21 08:33:09.906068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.482 [2024-07-21 08:33:09.915543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.482 [2024-07-21 08:33:09.915967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-21 08:33:09.915999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.482 [2024-07-21 08:33:09.916017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.482 [2024-07-21 08:33:09.916255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.482 [2024-07-21 08:33:09.916497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.482 [2024-07-21 08:33:09.916522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.482 [2024-07-21 08:33:09.916539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.482 [2024-07-21 08:33:09.920117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.482 [2024-07-21 08:33:09.929389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.482 [2024-07-21 08:33:09.929761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-21 08:33:09.929793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.482 [2024-07-21 08:33:09.929811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.482 [2024-07-21 08:33:09.930050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.482 [2024-07-21 08:33:09.930299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.482 [2024-07-21 08:33:09.930324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.482 [2024-07-21 08:33:09.930341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.482 [2024-07-21 08:33:09.933922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.482 [2024-07-21 08:33:09.943401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.482 [2024-07-21 08:33:09.943764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-21 08:33:09.943796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.482 [2024-07-21 08:33:09.943815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.482 [2024-07-21 08:33:09.944052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.482 [2024-07-21 08:33:09.944296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.482 [2024-07-21 08:33:09.944321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.482 [2024-07-21 08:33:09.944337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.482 [2024-07-21 08:33:09.947919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.482 [2024-07-21 08:33:09.957394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.482 [2024-07-21 08:33:09.957785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-21 08:33:09.957817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.482 [2024-07-21 08:33:09.957835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.482 [2024-07-21 08:33:09.958074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.482 [2024-07-21 08:33:09.958317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.482 [2024-07-21 08:33:09.958343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.482 [2024-07-21 08:33:09.958360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.482 [2024-07-21 08:33:09.961940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.482 [2024-07-21 08:33:09.971419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:00.482 [2024-07-21 08:33:09.971782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.482 [2024-07-21 08:33:09.971814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:00.482 [2024-07-21 08:33:09.971832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:00.482 [2024-07-21 08:33:09.972070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:00.482 [2024-07-21 08:33:09.972313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:00.482 [2024-07-21 08:33:09.972337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:00.482 [2024-07-21 08:33:09.972353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:00.482 [2024-07-21 08:33:09.975941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:00.482 [2024-07-21 08:33:09.985429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.482 [2024-07-21 08:33:09.985824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.482 [2024-07-21 08:33:09.985855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.482 [2024-07-21 08:33:09.985874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.482 [2024-07-21 08:33:09.986122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.482 [2024-07-21 08:33:09.986366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.482 [2024-07-21 08:33:09.986391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.482 [2024-07-21 08:33:09.986408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.482 [2024-07-21 08:33:09.989994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.482 [2024-07-21 08:33:09.999266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.482 [2024-07-21 08:33:09.999678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.482 [2024-07-21 08:33:09.999710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.482 [2024-07-21 08:33:09.999728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.482 [2024-07-21 08:33:09.999966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.482 [2024-07-21 08:33:10.000210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.482 [2024-07-21 08:33:10.000235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.482 [2024-07-21 08:33:10.000255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.482 [2024-07-21 08:33:10.004426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.482 [2024-07-21 08:33:10.013246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.482 [2024-07-21 08:33:10.013661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.482 [2024-07-21 08:33:10.013695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.482 [2024-07-21 08:33:10.013715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.482 [2024-07-21 08:33:10.013953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.482 [2024-07-21 08:33:10.014197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.482 [2024-07-21 08:33:10.014221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.482 [2024-07-21 08:33:10.014238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.482 [2024-07-21 08:33:10.017817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.482 [2024-07-21 08:33:10.027084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.482 [2024-07-21 08:33:10.027498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.482 [2024-07-21 08:33:10.027530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.482 [2024-07-21 08:33:10.027554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.482 [2024-07-21 08:33:10.027807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.482 [2024-07-21 08:33:10.028051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.482 [2024-07-21 08:33:10.028076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.482 [2024-07-21 08:33:10.028092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.482 [2024-07-21 08:33:10.031669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.482 [2024-07-21 08:33:10.040939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.482 [2024-07-21 08:33:10.041339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-21 08:33:10.041370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.483 [2024-07-21 08:33:10.041388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.483 [2024-07-21 08:33:10.041636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.483 [2024-07-21 08:33:10.041879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.483 [2024-07-21 08:33:10.041903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.483 [2024-07-21 08:33:10.041919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.483 [2024-07-21 08:33:10.045486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.483 [2024-07-21 08:33:10.054972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.483 [2024-07-21 08:33:10.055377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-21 08:33:10.055408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.483 [2024-07-21 08:33:10.055426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.483 [2024-07-21 08:33:10.055677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.483 [2024-07-21 08:33:10.055921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.483 [2024-07-21 08:33:10.055945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.483 [2024-07-21 08:33:10.055961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.483 [2024-07-21 08:33:10.059528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.483 [2024-07-21 08:33:10.068810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.483 [2024-07-21 08:33:10.069193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-21 08:33:10.069225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.483 [2024-07-21 08:33:10.069244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.483 [2024-07-21 08:33:10.069483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.483 [2024-07-21 08:33:10.069739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.483 [2024-07-21 08:33:10.069770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.483 [2024-07-21 08:33:10.069787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.483 [2024-07-21 08:33:10.073359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.483 [2024-07-21 08:33:10.082844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.483 [2024-07-21 08:33:10.083221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-21 08:33:10.083252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.483 [2024-07-21 08:33:10.083271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.483 [2024-07-21 08:33:10.083509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.483 [2024-07-21 08:33:10.083762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.483 [2024-07-21 08:33:10.083790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.483 [2024-07-21 08:33:10.083806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.483 [2024-07-21 08:33:10.087630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.483 [2024-07-21 08:33:10.096708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.483 [2024-07-21 08:33:10.097089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.483 [2024-07-21 08:33:10.097122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.483 [2024-07-21 08:33:10.097141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.483 [2024-07-21 08:33:10.097379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.483 [2024-07-21 08:33:10.097633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.483 [2024-07-21 08:33:10.097659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.483 [2024-07-21 08:33:10.097675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.483 [2024-07-21 08:33:10.101243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.746 [2024-07-21 08:33:10.110733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.746 [2024-07-21 08:33:10.111125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.746 [2024-07-21 08:33:10.111158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.746 [2024-07-21 08:33:10.111177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.746 [2024-07-21 08:33:10.111416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.746 [2024-07-21 08:33:10.111673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.746 [2024-07-21 08:33:10.111699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.746 [2024-07-21 08:33:10.111716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.746 [2024-07-21 08:33:10.115287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.746 [2024-07-21 08:33:10.124768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.746 [2024-07-21 08:33:10.125183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.746 [2024-07-21 08:33:10.125216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.746 [2024-07-21 08:33:10.125234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.746 [2024-07-21 08:33:10.125472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.746 [2024-07-21 08:33:10.125730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.746 [2024-07-21 08:33:10.125757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.746 [2024-07-21 08:33:10.125773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.746 [2024-07-21 08:33:10.129341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.746 [2024-07-21 08:33:10.138598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.746 [2024-07-21 08:33:10.138982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.746 [2024-07-21 08:33:10.139014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.746 [2024-07-21 08:33:10.139031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.746 [2024-07-21 08:33:10.139269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.746 [2024-07-21 08:33:10.139511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.746 [2024-07-21 08:33:10.139537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.746 [2024-07-21 08:33:10.139553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.746 [2024-07-21 08:33:10.143137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.746 [2024-07-21 08:33:10.152620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.746 [2024-07-21 08:33:10.153021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.746 [2024-07-21 08:33:10.153052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.746 [2024-07-21 08:33:10.153070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.746 [2024-07-21 08:33:10.153308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.746 [2024-07-21 08:33:10.153550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.746 [2024-07-21 08:33:10.153575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.746 [2024-07-21 08:33:10.153591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.746 [2024-07-21 08:33:10.157174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.746 [2024-07-21 08:33:10.166657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.746 [2024-07-21 08:33:10.167063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.746 [2024-07-21 08:33:10.167095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.746 [2024-07-21 08:33:10.167113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.746 [2024-07-21 08:33:10.167356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.746 [2024-07-21 08:33:10.167599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.746 [2024-07-21 08:33:10.167636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.746 [2024-07-21 08:33:10.167652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.746 [2024-07-21 08:33:10.171220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.746 [2024-07-21 08:33:10.180482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.746 [2024-07-21 08:33:10.180891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.746 [2024-07-21 08:33:10.180924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.746 [2024-07-21 08:33:10.180942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.746 [2024-07-21 08:33:10.181180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.746 [2024-07-21 08:33:10.181423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.746 [2024-07-21 08:33:10.181448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.746 [2024-07-21 08:33:10.181464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.746 [2024-07-21 08:33:10.185044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.746 [2024-07-21 08:33:10.194312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.746 [2024-07-21 08:33:10.194698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.746 [2024-07-21 08:33:10.194731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.746 [2024-07-21 08:33:10.194749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.746 [2024-07-21 08:33:10.194987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.746 [2024-07-21 08:33:10.195230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.746 [2024-07-21 08:33:10.195255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.746 [2024-07-21 08:33:10.195272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.746 [2024-07-21 08:33:10.198850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.746 [2024-07-21 08:33:10.208323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.746 [2024-07-21 08:33:10.208709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.746 [2024-07-21 08:33:10.208742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.746 [2024-07-21 08:33:10.208760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.746 [2024-07-21 08:33:10.208998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.746 [2024-07-21 08:33:10.209240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.746 [2024-07-21 08:33:10.209265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.746 [2024-07-21 08:33:10.209288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.746 [2024-07-21 08:33:10.212868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.746 [2024-07-21 08:33:10.222357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.746 [2024-07-21 08:33:10.222745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.746 [2024-07-21 08:33:10.222777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.746 [2024-07-21 08:33:10.222795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.746 [2024-07-21 08:33:10.223033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.746 [2024-07-21 08:33:10.223275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.746 [2024-07-21 08:33:10.223300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.746 [2024-07-21 08:33:10.223316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.746 [2024-07-21 08:33:10.226896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.747 [2024-07-21 08:33:10.236367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.747 [2024-07-21 08:33:10.236758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.747 [2024-07-21 08:33:10.236790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.747 [2024-07-21 08:33:10.236808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.747 [2024-07-21 08:33:10.237046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.747 [2024-07-21 08:33:10.237289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.747 [2024-07-21 08:33:10.237314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.747 [2024-07-21 08:33:10.237330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.747 [2024-07-21 08:33:10.240913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.747 [2024-07-21 08:33:10.250379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.747 [2024-07-21 08:33:10.250785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.747 [2024-07-21 08:33:10.250817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.747 [2024-07-21 08:33:10.250836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.747 [2024-07-21 08:33:10.251073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.747 [2024-07-21 08:33:10.251315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.747 [2024-07-21 08:33:10.251340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.747 [2024-07-21 08:33:10.251357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.747 [2024-07-21 08:33:10.254938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.747 [2024-07-21 08:33:10.264406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.747 [2024-07-21 08:33:10.264805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.747 [2024-07-21 08:33:10.264837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.747 [2024-07-21 08:33:10.264855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.747 [2024-07-21 08:33:10.265093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.747 [2024-07-21 08:33:10.265335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.747 [2024-07-21 08:33:10.265360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.747 [2024-07-21 08:33:10.265377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.747 [2024-07-21 08:33:10.268964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.747 [2024-07-21 08:33:10.278556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.747 [2024-07-21 08:33:10.278957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.747 [2024-07-21 08:33:10.278993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.747 [2024-07-21 08:33:10.279013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.747 [2024-07-21 08:33:10.279252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.747 [2024-07-21 08:33:10.279495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.747 [2024-07-21 08:33:10.279520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.747 [2024-07-21 08:33:10.279536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.747 [2024-07-21 08:33:10.283136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.747 [2024-07-21 08:33:10.292319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.747 [2024-07-21 08:33:10.292703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.747 [2024-07-21 08:33:10.292736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.747 [2024-07-21 08:33:10.292753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.747 [2024-07-21 08:33:10.292983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.747 [2024-07-21 08:33:10.293195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.747 [2024-07-21 08:33:10.293216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.747 [2024-07-21 08:33:10.293229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.747 [2024-07-21 08:33:10.296326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.747 [2024-07-21 08:33:10.305694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.747 [2024-07-21 08:33:10.306090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.747 [2024-07-21 08:33:10.306120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.747 [2024-07-21 08:33:10.306137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.747 [2024-07-21 08:33:10.306377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.747 [2024-07-21 08:33:10.306605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.747 [2024-07-21 08:33:10.306637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.747 [2024-07-21 08:33:10.306652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.747 [2024-07-21 08:33:10.309672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.747 [2024-07-21 08:33:10.319225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.747 [2024-07-21 08:33:10.319653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.747 [2024-07-21 08:33:10.319683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.747 [2024-07-21 08:33:10.319700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.747 [2024-07-21 08:33:10.319929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.747 [2024-07-21 08:33:10.320145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.747 [2024-07-21 08:33:10.320167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.747 [2024-07-21 08:33:10.320194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.747 [2024-07-21 08:33:10.323438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.747 [2024-07-21 08:33:10.332463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.747 [2024-07-21 08:33:10.332860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.747 [2024-07-21 08:33:10.332889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.747 [2024-07-21 08:33:10.332921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.747 [2024-07-21 08:33:10.333153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.747 [2024-07-21 08:33:10.333348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.747 [2024-07-21 08:33:10.333369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.747 [2024-07-21 08:33:10.333382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.747 [2024-07-21 08:33:10.336374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.747 [2024-07-21 08:33:10.345802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.747 [2024-07-21 08:33:10.346138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.747 [2024-07-21 08:33:10.346165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.747 [2024-07-21 08:33:10.346181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.747 [2024-07-21 08:33:10.346396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.747 [2024-07-21 08:33:10.346605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.747 [2024-07-21 08:33:10.346650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.747 [2024-07-21 08:33:10.346665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.747 [2024-07-21 08:33:10.349645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.747 [2024-07-21 08:33:10.359068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.747 [2024-07-21 08:33:10.359431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.747 [2024-07-21 08:33:10.359459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.747 [2024-07-21 08:33:10.359475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.747 [2024-07-21 08:33:10.359724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.747 [2024-07-21 08:33:10.359946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.747 [2024-07-21 08:33:10.359982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.747 [2024-07-21 08:33:10.359996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:00.747 [2024-07-21 08:33:10.362966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:00.747 [2024-07-21 08:33:10.372540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:00.747 [2024-07-21 08:33:10.372955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.747 [2024-07-21 08:33:10.372984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:00.747 [2024-07-21 08:33:10.373001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:00.747 [2024-07-21 08:33:10.373225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:00.747 [2024-07-21 08:33:10.373433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:00.747 [2024-07-21 08:33:10.373455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:00.747 [2024-07-21 08:33:10.373468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.013 [2024-07-21 08:33:10.376795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.013 [2024-07-21 08:33:10.386096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.013 [2024-07-21 08:33:10.386498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.013 [2024-07-21 08:33:10.386528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.013 [2024-07-21 08:33:10.386546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.013 [2024-07-21 08:33:10.386771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.013 [2024-07-21 08:33:10.387019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.013 [2024-07-21 08:33:10.387041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.013 [2024-07-21 08:33:10.387056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.013 [2024-07-21 08:33:10.390299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.013 [2024-07-21 08:33:10.399846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.013 [2024-07-21 08:33:10.400239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.013 [2024-07-21 08:33:10.400269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.013 [2024-07-21 08:33:10.400292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.013 [2024-07-21 08:33:10.400522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.013 [2024-07-21 08:33:10.400807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.013 [2024-07-21 08:33:10.400831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.013 [2024-07-21 08:33:10.400846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.013 [2024-07-21 08:33:10.403861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.013 [2024-07-21 08:33:10.413089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.013 [2024-07-21 08:33:10.413455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.013 [2024-07-21 08:33:10.413484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.013 [2024-07-21 08:33:10.413500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.013 [2024-07-21 08:33:10.413750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.013 [2024-07-21 08:33:10.413979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.013 [2024-07-21 08:33:10.414000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.013 [2024-07-21 08:33:10.414013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.013 [2024-07-21 08:33:10.416986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.013 [2024-07-21 08:33:10.426393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.013 [2024-07-21 08:33:10.426751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.013 [2024-07-21 08:33:10.426781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.013 [2024-07-21 08:33:10.426798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.013 [2024-07-21 08:33:10.427041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.013 [2024-07-21 08:33:10.427251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.013 [2024-07-21 08:33:10.427273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.013 [2024-07-21 08:33:10.427286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.014 [2024-07-21 08:33:10.430267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.014 [2024-07-21 08:33:10.439726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.014 [2024-07-21 08:33:10.440172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.014 [2024-07-21 08:33:10.440201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.014 [2024-07-21 08:33:10.440218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.014 [2024-07-21 08:33:10.440459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.014 [2024-07-21 08:33:10.440695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.014 [2024-07-21 08:33:10.440722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.014 [2024-07-21 08:33:10.440737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.014 [2024-07-21 08:33:10.443713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.014 [2024-07-21 08:33:10.452992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.014 [2024-07-21 08:33:10.453322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.014 [2024-07-21 08:33:10.453349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.014 [2024-07-21 08:33:10.453365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.014 [2024-07-21 08:33:10.453588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.014 [2024-07-21 08:33:10.453827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.014 [2024-07-21 08:33:10.453849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.014 [2024-07-21 08:33:10.453863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.014 [2024-07-21 08:33:10.456874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.014 [2024-07-21 08:33:10.466319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.014 [2024-07-21 08:33:10.466645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.014 [2024-07-21 08:33:10.466673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.014 [2024-07-21 08:33:10.466689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.014 [2024-07-21 08:33:10.466896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.014 [2024-07-21 08:33:10.467122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.014 [2024-07-21 08:33:10.467142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.014 [2024-07-21 08:33:10.467155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.014 [2024-07-21 08:33:10.470132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.014 [2024-07-21 08:33:10.479582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.014 [2024-07-21 08:33:10.479937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.014 [2024-07-21 08:33:10.479979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.014 [2024-07-21 08:33:10.479995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.014 [2024-07-21 08:33:10.480210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.014 [2024-07-21 08:33:10.480420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.014 [2024-07-21 08:33:10.480440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.014 [2024-07-21 08:33:10.480452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.014 [2024-07-21 08:33:10.483361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.014 [2024-07-21 08:33:10.492961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.014 [2024-07-21 08:33:10.493340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.014 [2024-07-21 08:33:10.493370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.014 [2024-07-21 08:33:10.493386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.014 [2024-07-21 08:33:10.493640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.014 [2024-07-21 08:33:10.493866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.014 [2024-07-21 08:33:10.493890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.014 [2024-07-21 08:33:10.493904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.014 [2024-07-21 08:33:10.496876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.014 [2024-07-21 08:33:10.506254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.014 [2024-07-21 08:33:10.506695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.014 [2024-07-21 08:33:10.506724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.014 [2024-07-21 08:33:10.506741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.014 [2024-07-21 08:33:10.506981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.014 [2024-07-21 08:33:10.507191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.014 [2024-07-21 08:33:10.507212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.014 [2024-07-21 08:33:10.507225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.014 [2024-07-21 08:33:10.510177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.014 [2024-07-21 08:33:10.519504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.014 [2024-07-21 08:33:10.519901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.014 [2024-07-21 08:33:10.519944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.014 [2024-07-21 08:33:10.519960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.014 [2024-07-21 08:33:10.520193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.014 [2024-07-21 08:33:10.520401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.014 [2024-07-21 08:33:10.520421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.014 [2024-07-21 08:33:10.520434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.014 [2024-07-21 08:33:10.523409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.014 [2024-07-21 08:33:10.532844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.014 [2024-07-21 08:33:10.533189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.014 [2024-07-21 08:33:10.533217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.014 [2024-07-21 08:33:10.533238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.014 [2024-07-21 08:33:10.533460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.014 [2024-07-21 08:33:10.533712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.014 [2024-07-21 08:33:10.533734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.014 [2024-07-21 08:33:10.533748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.014 [2024-07-21 08:33:10.536839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.014 [2024-07-21 08:33:10.546167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.014 [2024-07-21 08:33:10.546532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.014 [2024-07-21 08:33:10.546560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.014 [2024-07-21 08:33:10.546577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.014 [2024-07-21 08:33:10.546823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.014 [2024-07-21 08:33:10.547033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.014 [2024-07-21 08:33:10.547053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.014 [2024-07-21 08:33:10.547066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.014 [2024-07-21 08:33:10.550055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.014 [2024-07-21 08:33:10.559297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.014 [2024-07-21 08:33:10.559673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.014 [2024-07-21 08:33:10.559702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.014 [2024-07-21 08:33:10.559718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.014 [2024-07-21 08:33:10.559966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.014 [2024-07-21 08:33:10.560175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.014 [2024-07-21 08:33:10.560195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.014 [2024-07-21 08:33:10.560208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.014 [2024-07-21 08:33:10.563345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.014 [2024-07-21 08:33:10.572561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.014 [2024-07-21 08:33:10.572951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.014 [2024-07-21 08:33:10.572980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.014 [2024-07-21 08:33:10.572996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.014 [2024-07-21 08:33:10.573236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.014 [2024-07-21 08:33:10.573444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.015 [2024-07-21 08:33:10.573470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.015 [2024-07-21 08:33:10.573484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.015 [2024-07-21 08:33:10.576472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.015 [2024-07-21 08:33:10.585890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.015 [2024-07-21 08:33:10.586349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.015 [2024-07-21 08:33:10.586378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.015 [2024-07-21 08:33:10.586395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.015 [2024-07-21 08:33:10.586637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.015 [2024-07-21 08:33:10.586841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.015 [2024-07-21 08:33:10.586862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.015 [2024-07-21 08:33:10.586875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.015 [2024-07-21 08:33:10.589829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.015 [2024-07-21 08:33:10.599140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.015 [2024-07-21 08:33:10.599576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.015 [2024-07-21 08:33:10.599604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.015 [2024-07-21 08:33:10.599629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.015 [2024-07-21 08:33:10.599872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.015 [2024-07-21 08:33:10.600081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.015 [2024-07-21 08:33:10.600101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.015 [2024-07-21 08:33:10.600113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.015 [2024-07-21 08:33:10.603106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.015 [2024-07-21 08:33:10.612453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.015 [2024-07-21 08:33:10.612836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.015 [2024-07-21 08:33:10.612865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.015 [2024-07-21 08:33:10.612882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.015 [2024-07-21 08:33:10.613125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.015 [2024-07-21 08:33:10.613318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.015 [2024-07-21 08:33:10.613338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.015 [2024-07-21 08:33:10.613352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.015 [2024-07-21 08:33:10.616382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.015 [2024-07-21 08:33:10.625821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.015 [2024-07-21 08:33:10.626204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.015 [2024-07-21 08:33:10.626232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.015 [2024-07-21 08:33:10.626248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.015 [2024-07-21 08:33:10.626469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.015 [2024-07-21 08:33:10.626704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.015 [2024-07-21 08:33:10.626737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.015 [2024-07-21 08:33:10.626750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.015 [2024-07-21 08:33:10.629702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.015 [2024-07-21 08:33:10.639238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.015 [2024-07-21 08:33:10.639605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.015 [2024-07-21 08:33:10.639641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.015 [2024-07-21 08:33:10.639659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.015 [2024-07-21 08:33:10.639900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.015 [2024-07-21 08:33:10.640118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.015 [2024-07-21 08:33:10.640154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.015 [2024-07-21 08:33:10.640167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.276 [2024-07-21 08:33:10.643363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.276 [2024-07-21 08:33:10.652543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.276 [2024-07-21 08:33:10.652877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.276 [2024-07-21 08:33:10.652905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.276 [2024-07-21 08:33:10.652921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.276 [2024-07-21 08:33:10.653137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.276 [2024-07-21 08:33:10.653345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.276 [2024-07-21 08:33:10.653366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.276 [2024-07-21 08:33:10.653379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.276 [2024-07-21 08:33:10.656398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.276 [2024-07-21 08:33:10.665783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.276 [2024-07-21 08:33:10.666172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.276 [2024-07-21 08:33:10.666200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.276 [2024-07-21 08:33:10.666217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.276 [2024-07-21 08:33:10.666461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.276 [2024-07-21 08:33:10.666715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.276 [2024-07-21 08:33:10.666739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.276 [2024-07-21 08:33:10.666754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.276 [2024-07-21 08:33:10.669728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.276 [2024-07-21 08:33:10.678941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.276 [2024-07-21 08:33:10.679305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.276 [2024-07-21 08:33:10.679334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.276 [2024-07-21 08:33:10.679350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.276 [2024-07-21 08:33:10.679589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.276 [2024-07-21 08:33:10.679831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.276 [2024-07-21 08:33:10.679854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.276 [2024-07-21 08:33:10.679868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.276 [2024-07-21 08:33:10.682835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.276 [2024-07-21 08:33:10.692125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.276 [2024-07-21 08:33:10.692445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.276 [2024-07-21 08:33:10.692472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.276 [2024-07-21 08:33:10.692488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.276 [2024-07-21 08:33:10.692733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.276 [2024-07-21 08:33:10.692953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.276 [2024-07-21 08:33:10.692974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.276 [2024-07-21 08:33:10.692987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.276 [2024-07-21 08:33:10.695936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.276 [2024-07-21 08:33:10.705379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.276 [2024-07-21 08:33:10.705786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.276 [2024-07-21 08:33:10.705816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.276 [2024-07-21 08:33:10.705832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.276 [2024-07-21 08:33:10.706083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.276 [2024-07-21 08:33:10.706276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.276 [2024-07-21 08:33:10.706297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.276 [2024-07-21 08:33:10.706315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.276 [2024-07-21 08:33:10.709303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.276 [2024-07-21 08:33:10.718582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.276 [2024-07-21 08:33:10.719065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.276 [2024-07-21 08:33:10.719093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.276 [2024-07-21 08:33:10.719110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.276 [2024-07-21 08:33:10.719362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.276 [2024-07-21 08:33:10.719572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.276 [2024-07-21 08:33:10.719606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.276 [2024-07-21 08:33:10.719629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.276 [2024-07-21 08:33:10.722597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.276 [2024-07-21 08:33:10.731871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.276 [2024-07-21 08:33:10.732314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.276 [2024-07-21 08:33:10.732342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.276 [2024-07-21 08:33:10.732357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.276 [2024-07-21 08:33:10.732591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.276 [2024-07-21 08:33:10.732819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.276 [2024-07-21 08:33:10.732842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.276 [2024-07-21 08:33:10.732856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.276 [2024-07-21 08:33:10.735824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.277 [2024-07-21 08:33:10.745146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.277 [2024-07-21 08:33:10.745512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.277 [2024-07-21 08:33:10.745541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.277 [2024-07-21 08:33:10.745557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.277 [2024-07-21 08:33:10.745823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.277 [2024-07-21 08:33:10.746052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.277 [2024-07-21 08:33:10.746074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.277 [2024-07-21 08:33:10.746087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.277 [2024-07-21 08:33:10.749036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.277 [2024-07-21 08:33:10.758547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.277 [2024-07-21 08:33:10.758914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.277 [2024-07-21 08:33:10.758946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.277 [2024-07-21 08:33:10.758962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.277 [2024-07-21 08:33:10.759190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.277 [2024-07-21 08:33:10.759384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.277 [2024-07-21 08:33:10.759415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.277 [2024-07-21 08:33:10.759429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.277 [2024-07-21 08:33:10.762408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.277 [2024-07-21 08:33:10.771835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.277 [2024-07-21 08:33:10.772219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.277 [2024-07-21 08:33:10.772248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.277 [2024-07-21 08:33:10.772264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.277 [2024-07-21 08:33:10.772503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.277 [2024-07-21 08:33:10.772757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.277 [2024-07-21 08:33:10.772780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.277 [2024-07-21 08:33:10.772793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.277 [2024-07-21 08:33:10.775805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.277 [2024-07-21 08:33:10.785063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.277 [2024-07-21 08:33:10.785508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.277 [2024-07-21 08:33:10.785551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.277 [2024-07-21 08:33:10.785566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.277 [2024-07-21 08:33:10.785812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.277 [2024-07-21 08:33:10.786024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.277 [2024-07-21 08:33:10.786045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.277 [2024-07-21 08:33:10.786058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.277 [2024-07-21 08:33:10.789005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.277 [2024-07-21 08:33:10.798220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.277 [2024-07-21 08:33:10.798647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.277 [2024-07-21 08:33:10.798676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.277 [2024-07-21 08:33:10.798692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.277 [2024-07-21 08:33:10.798936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.277 [2024-07-21 08:33:10.799150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.277 [2024-07-21 08:33:10.799171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.277 [2024-07-21 08:33:10.799184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.277 [2024-07-21 08:33:10.802147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.277 [2024-07-21 08:33:10.811388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.277 [2024-07-21 08:33:10.811815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.277 [2024-07-21 08:33:10.811845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.277 [2024-07-21 08:33:10.811862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.277 [2024-07-21 08:33:10.812100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.277 [2024-07-21 08:33:10.812308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.277 [2024-07-21 08:33:10.812329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.277 [2024-07-21 08:33:10.812342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.277 [2024-07-21 08:33:10.815411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.277 [2024-07-21 08:33:10.824878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.277 [2024-07-21 08:33:10.825250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.277 [2024-07-21 08:33:10.825279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.277 [2024-07-21 08:33:10.825296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.277 [2024-07-21 08:33:10.825539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.277 [2024-07-21 08:33:10.825763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.277 [2024-07-21 08:33:10.825785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.277 [2024-07-21 08:33:10.825799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.277 [2024-07-21 08:33:10.828768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.277 [2024-07-21 08:33:10.838523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.277 [2024-07-21 08:33:10.838884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.277 [2024-07-21 08:33:10.838913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.277 [2024-07-21 08:33:10.838930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.277 [2024-07-21 08:33:10.839162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.277 [2024-07-21 08:33:10.839368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.277 [2024-07-21 08:33:10.839389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.277 [2024-07-21 08:33:10.839403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.277 [2024-07-21 08:33:10.842817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.277 [2024-07-21 08:33:10.851715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.277 [2024-07-21 08:33:10.852079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.277 [2024-07-21 08:33:10.852107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.277 [2024-07-21 08:33:10.852124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.277 [2024-07-21 08:33:10.852346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.277 [2024-07-21 08:33:10.852556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.277 [2024-07-21 08:33:10.852576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.277 [2024-07-21 08:33:10.852590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.277 [2024-07-21 08:33:10.855599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.277 [2024-07-21 08:33:10.864868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.277 [2024-07-21 08:33:10.865250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.277 [2024-07-21 08:33:10.865279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.277 [2024-07-21 08:33:10.865296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.277 [2024-07-21 08:33:10.865537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.277 [2024-07-21 08:33:10.865790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.277 [2024-07-21 08:33:10.865813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.277 [2024-07-21 08:33:10.865827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.277 [2024-07-21 08:33:10.868799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.277 [2024-07-21 08:33:10.878048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.277 [2024-07-21 08:33:10.878434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.277 [2024-07-21 08:33:10.878462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.277 [2024-07-21 08:33:10.878477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.277 [2024-07-21 08:33:10.878723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.277 [2024-07-21 08:33:10.878944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.277 [2024-07-21 08:33:10.878964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.278 [2024-07-21 08:33:10.878977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.278 [2024-07-21 08:33:10.881943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.278 [2024-07-21 08:33:10.891209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.278 [2024-07-21 08:33:10.891635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.278 [2024-07-21 08:33:10.891664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.278 [2024-07-21 08:33:10.891686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.278 [2024-07-21 08:33:10.891929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.278 [2024-07-21 08:33:10.892139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.278 [2024-07-21 08:33:10.892159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.278 [2024-07-21 08:33:10.892173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.278 [2024-07-21 08:33:10.895164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.278 [2024-07-21 08:33:10.904868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.538 [2024-07-21 08:33:10.905302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.538 [2024-07-21 08:33:10.905330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.538 [2024-07-21 08:33:10.905348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.538 [2024-07-21 08:33:10.905562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.538 [2024-07-21 08:33:10.905829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.538 [2024-07-21 08:33:10.905851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.538 [2024-07-21 08:33:10.905865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.538 [2024-07-21 08:33:10.908849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.538 [2024-07-21 08:33:10.918200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.538 [2024-07-21 08:33:10.918629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.538 [2024-07-21 08:33:10.918667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.538 [2024-07-21 08:33:10.918693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.538 [2024-07-21 08:33:10.918924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.538 [2024-07-21 08:33:10.919152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.538 [2024-07-21 08:33:10.919173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.538 [2024-07-21 08:33:10.919185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.538 [2024-07-21 08:33:10.922211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.538 [2024-07-21 08:33:10.931461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.538 [2024-07-21 08:33:10.931855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.538 [2024-07-21 08:33:10.931883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.538 [2024-07-21 08:33:10.931900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.538 [2024-07-21 08:33:10.932140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.538 [2024-07-21 08:33:10.932348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.538 [2024-07-21 08:33:10.932377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.538 [2024-07-21 08:33:10.932391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.538 [2024-07-21 08:33:10.935387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.538 [2024-07-21 08:33:10.944851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:01.538 [2024-07-21 08:33:10.945342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.538 [2024-07-21 08:33:10.945371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420
00:37:01.538 [2024-07-21 08:33:10.945387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set
00:37:01.538 [2024-07-21 08:33:10.945638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor
00:37:01.538 [2024-07-21 08:33:10.945865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:01.538 [2024-07-21 08:33:10.945887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:01.538 [2024-07-21 08:33:10.945901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:01.538 [2024-07-21 08:33:10.948937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:01.538 [2024-07-21 08:33:10.958304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.538 [2024-07-21 08:33:10.958672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.538 [2024-07-21 08:33:10.958702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.538 [2024-07-21 08:33:10.958719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.538 [2024-07-21 08:33:10.958961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.538 [2024-07-21 08:33:10.959155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.538 [2024-07-21 08:33:10.959175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.538 [2024-07-21 08:33:10.959189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.538 [2024-07-21 08:33:10.962224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.538 [2024-07-21 08:33:10.971655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.538 [2024-07-21 08:33:10.972009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.538 [2024-07-21 08:33:10.972036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.538 [2024-07-21 08:33:10.972051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.538 [2024-07-21 08:33:10.972252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.538 [2024-07-21 08:33:10.972479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.538 [2024-07-21 08:33:10.972500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.538 [2024-07-21 08:33:10.972513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.538 [2024-07-21 08:33:10.975814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.538 [2024-07-21 08:33:10.984987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.538 [2024-07-21 08:33:10.985351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.538 [2024-07-21 08:33:10.985379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.538 [2024-07-21 08:33:10.985396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.538 [2024-07-21 08:33:10.985652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.538 [2024-07-21 08:33:10.985875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.538 [2024-07-21 08:33:10.985909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.538 [2024-07-21 08:33:10.985922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.538 [2024-07-21 08:33:10.988898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.538 [2024-07-21 08:33:10.998318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.538 [2024-07-21 08:33:10.998745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.538 [2024-07-21 08:33:10.998774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.538 [2024-07-21 08:33:10.998790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.538 [2024-07-21 08:33:10.999028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.538 [2024-07-21 08:33:10.999222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.538 [2024-07-21 08:33:10.999241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.538 [2024-07-21 08:33:10.999254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.538 [2024-07-21 08:33:11.002216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.538 [2024-07-21 08:33:11.011524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.538 [2024-07-21 08:33:11.011932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.538 [2024-07-21 08:33:11.011975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.538 [2024-07-21 08:33:11.011991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.538 [2024-07-21 08:33:11.012225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.538 [2024-07-21 08:33:11.012433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.538 [2024-07-21 08:33:11.012452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.538 [2024-07-21 08:33:11.012465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.538 [2024-07-21 08:33:11.015418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.538 [2024-07-21 08:33:11.024798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.538 [2024-07-21 08:33:11.025181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.538 [2024-07-21 08:33:11.025210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.538 [2024-07-21 08:33:11.025225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.538 [2024-07-21 08:33:11.025471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.538 [2024-07-21 08:33:11.025708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.538 [2024-07-21 08:33:11.025729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.538 [2024-07-21 08:33:11.025743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.538 [2024-07-21 08:33:11.028731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.538 [2024-07-21 08:33:11.038054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.538 [2024-07-21 08:33:11.038424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.539 [2024-07-21 08:33:11.038452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.539 [2024-07-21 08:33:11.038468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.539 [2024-07-21 08:33:11.038718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.539 [2024-07-21 08:33:11.038975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.539 [2024-07-21 08:33:11.039004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.539 [2024-07-21 08:33:11.039036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 79863 Killed "${NVMF_APP[@]}" "$@" 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.539 [2024-07-21 08:33:11.042437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=81395 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 81395 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 81395 ']' 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:01.539 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.539 [2024-07-21 08:33:11.051678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.539 [2024-07-21 08:33:11.052129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.539 [2024-07-21 08:33:11.052160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.539 [2024-07-21 08:33:11.052178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.539 [2024-07-21 08:33:11.052425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.539 [2024-07-21 08:33:11.052655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.539 [2024-07-21 08:33:11.052694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.539 [2024-07-21 08:33:11.052708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.539 [2024-07-21 08:33:11.055912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.539 [2024-07-21 08:33:11.065141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.539 [2024-07-21 08:33:11.065522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.539 [2024-07-21 08:33:11.065552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.539 [2024-07-21 08:33:11.065569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.539 [2024-07-21 08:33:11.065811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.539 [2024-07-21 08:33:11.066034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.539 [2024-07-21 08:33:11.066054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.539 [2024-07-21 08:33:11.066068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.539 [2024-07-21 08:33:11.069424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.539 [2024-07-21 08:33:11.078655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.539 [2024-07-21 08:33:11.079086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.539 [2024-07-21 08:33:11.079117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.539 [2024-07-21 08:33:11.079134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.539 [2024-07-21 08:33:11.079372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.539 [2024-07-21 08:33:11.079572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.539 [2024-07-21 08:33:11.079606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.539 [2024-07-21 08:33:11.079631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.539 [2024-07-21 08:33:11.082819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.539 [2024-07-21 08:33:11.090227] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:37:01.539 [2024-07-21 08:33:11.090288] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:01.539 [2024-07-21 08:33:11.092086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.539 [2024-07-21 08:33:11.092525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.539 [2024-07-21 08:33:11.092554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.539 [2024-07-21 08:33:11.092570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.539 [2024-07-21 08:33:11.092794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.539 [2024-07-21 08:33:11.093053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.539 [2024-07-21 08:33:11.093074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.539 [2024-07-21 08:33:11.093088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.539 [2024-07-21 08:33:11.096250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.539 [2024-07-21 08:33:11.105446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.539 [2024-07-21 08:33:11.105826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.539 [2024-07-21 08:33:11.105855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.539 [2024-07-21 08:33:11.105872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.539 [2024-07-21 08:33:11.106101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.539 [2024-07-21 08:33:11.106315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.539 [2024-07-21 08:33:11.106335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.539 [2024-07-21 08:33:11.106348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.539 [2024-07-21 08:33:11.109466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.539 [2024-07-21 08:33:11.118841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.539 [2024-07-21 08:33:11.119202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.539 [2024-07-21 08:33:11.119231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.539 [2024-07-21 08:33:11.119247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.539 [2024-07-21 08:33:11.119492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.539 [2024-07-21 08:33:11.119717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.539 [2024-07-21 08:33:11.119739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.539 [2024-07-21 08:33:11.119752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.539 [2024-07-21 08:33:11.122827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.539 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.539 [2024-07-21 08:33:11.132403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.539 [2024-07-21 08:33:11.132787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.539 [2024-07-21 08:33:11.132816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.539 [2024-07-21 08:33:11.132833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.539 [2024-07-21 08:33:11.133075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.539 [2024-07-21 08:33:11.133291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.539 [2024-07-21 08:33:11.133312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.539 [2024-07-21 08:33:11.133329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.539 [2024-07-21 08:33:11.136501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.539 [2024-07-21 08:33:11.146055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.539 [2024-07-21 08:33:11.146473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.539 [2024-07-21 08:33:11.146501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.540 [2024-07-21 08:33:11.146517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.540 [2024-07-21 08:33:11.146741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.540 [2024-07-21 08:33:11.146972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.540 [2024-07-21 08:33:11.146992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.540 [2024-07-21 08:33:11.147005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.540 [2024-07-21 08:33:11.150043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.540 [2024-07-21 08:33:11.157454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:01.540 [2024-07-21 08:33:11.159484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.540 [2024-07-21 08:33:11.159865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.540 [2024-07-21 08:33:11.159894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.540 [2024-07-21 08:33:11.159910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.540 [2024-07-21 08:33:11.160153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.540 [2024-07-21 08:33:11.160367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.540 [2024-07-21 08:33:11.160387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.540 [2024-07-21 08:33:11.160400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.540 [2024-07-21 08:33:11.163649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.801 [2024-07-21 08:33:11.173081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.801 [2024-07-21 08:33:11.173568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-07-21 08:33:11.173604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-07-21 08:33:11.173632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.801 [2024-07-21 08:33:11.173868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.801 [2024-07-21 08:33:11.174104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.801 [2024-07-21 08:33:11.174125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.801 [2024-07-21 08:33:11.174140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.801 [2024-07-21 08:33:11.177219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.801 [2024-07-21 08:33:11.186411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.801 [2024-07-21 08:33:11.186842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-07-21 08:33:11.186871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-07-21 08:33:11.186888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.801 [2024-07-21 08:33:11.187129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.801 [2024-07-21 08:33:11.187329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.801 [2024-07-21 08:33:11.187349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.801 [2024-07-21 08:33:11.187363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.801 [2024-07-21 08:33:11.190430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.801 [2024-07-21 08:33:11.199857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.801 [2024-07-21 08:33:11.200280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-07-21 08:33:11.200310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-07-21 08:33:11.200327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.801 [2024-07-21 08:33:11.200570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.801 [2024-07-21 08:33:11.200818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.801 [2024-07-21 08:33:11.200841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.801 [2024-07-21 08:33:11.200856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.801 [2024-07-21 08:33:11.203924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.801 [2024-07-21 08:33:11.213178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.801 [2024-07-21 08:33:11.213753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-07-21 08:33:11.213790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-07-21 08:33:11.213809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.801 [2024-07-21 08:33:11.214075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.801 [2024-07-21 08:33:11.214278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.801 [2024-07-21 08:33:11.214299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.801 [2024-07-21 08:33:11.214314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.801 [2024-07-21 08:33:11.217390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.801 [2024-07-21 08:33:11.226589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.801 [2024-07-21 08:33:11.227069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-07-21 08:33:11.227099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-07-21 08:33:11.227116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.801 [2024-07-21 08:33:11.227377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.801 [2024-07-21 08:33:11.227578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.801 [2024-07-21 08:33:11.227598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.801 [2024-07-21 08:33:11.227611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.801 [2024-07-21 08:33:11.230728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.801 [2024-07-21 08:33:11.240037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.801 [2024-07-21 08:33:11.240429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-07-21 08:33:11.240459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-07-21 08:33:11.240475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.802 [2024-07-21 08:33:11.240717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.802 [2024-07-21 08:33:11.240945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.802 [2024-07-21 08:33:11.240981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.802 [2024-07-21 08:33:11.240995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.802 [2024-07-21 08:33:11.244027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:01.802 [2024-07-21 08:33:11.245418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:01.802 [2024-07-21 08:33:11.245452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:01.802 [2024-07-21 08:33:11.245465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:01.802 [2024-07-21 08:33:11.245476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:01.802 [2024-07-21 08:33:11.245485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:01.802 [2024-07-21 08:33:11.245542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:01.802 [2024-07-21 08:33:11.245600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:01.802 [2024-07-21 08:33:11.245602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.802 [2024-07-21 08:33:11.253599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.802 [2024-07-21 08:33:11.254125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-07-21 08:33:11.254164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-07-21 08:33:11.254183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.802 [2024-07-21 08:33:11.254436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.802 [2024-07-21 08:33:11.254675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.802 [2024-07-21 08:33:11.254699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.802 [2024-07-21 08:33:11.254715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.802 [2024-07-21 08:33:11.257957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.802 [2024-07-21 08:33:11.267126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.802 [2024-07-21 08:33:11.267658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-07-21 08:33:11.267696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-07-21 08:33:11.267716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.802 [2024-07-21 08:33:11.267955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.802 [2024-07-21 08:33:11.268180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.802 [2024-07-21 08:33:11.268201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.802 [2024-07-21 08:33:11.268218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.802 [2024-07-21 08:33:11.271392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.802 [2024-07-21 08:33:11.280636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.802 [2024-07-21 08:33:11.281189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-07-21 08:33:11.281227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-07-21 08:33:11.281247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.802 [2024-07-21 08:33:11.281497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.802 [2024-07-21 08:33:11.281736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.802 [2024-07-21 08:33:11.281760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.802 [2024-07-21 08:33:11.281777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.802 [2024-07-21 08:33:11.284956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.802 [2024-07-21 08:33:11.294235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.802 [2024-07-21 08:33:11.294743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-07-21 08:33:11.294785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-07-21 08:33:11.294805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.802 [2024-07-21 08:33:11.295060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.802 [2024-07-21 08:33:11.295269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.802 [2024-07-21 08:33:11.295290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.802 [2024-07-21 08:33:11.295307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.802 [2024-07-21 08:33:11.298476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.802 [2024-07-21 08:33:11.307870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.802 [2024-07-21 08:33:11.308367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-07-21 08:33:11.308404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-07-21 08:33:11.308423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.802 [2024-07-21 08:33:11.308707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.802 [2024-07-21 08:33:11.308924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.802 [2024-07-21 08:33:11.308947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.802 [2024-07-21 08:33:11.308962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.802 [2024-07-21 08:33:11.312156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.802 [2024-07-21 08:33:11.321340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.802 [2024-07-21 08:33:11.321817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-07-21 08:33:11.321854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-07-21 08:33:11.321873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.802 [2024-07-21 08:33:11.322126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.802 [2024-07-21 08:33:11.322351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.802 [2024-07-21 08:33:11.322373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.802 [2024-07-21 08:33:11.322389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.802 [2024-07-21 08:33:11.325722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.802 [2024-07-21 08:33:11.334985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.802 [2024-07-21 08:33:11.335448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-07-21 08:33:11.335478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-07-21 08:33:11.335496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.802 [2024-07-21 08:33:11.335734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.802 [2024-07-21 08:33:11.335962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.802 [2024-07-21 08:33:11.335985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.802 [2024-07-21 08:33:11.335998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.802 [2024-07-21 08:33:11.339172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.802 [2024-07-21 08:33:11.348524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.802 [2024-07-21 08:33:11.348921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-07-21 08:33:11.348950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-07-21 08:33:11.348967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.802 [2024-07-21 08:33:11.349182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.802 [2024-07-21 08:33:11.349409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.802 [2024-07-21 08:33:11.349432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.802 [2024-07-21 08:33:11.349454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.802 [2024-07-21 08:33:11.352710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.802 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:01.802 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:37:01.802 08:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:01.802 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:01.802 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.802 [2024-07-21 08:33:11.362070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.802 [2024-07-21 08:33:11.362476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-07-21 08:33:11.362505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-07-21 08:33:11.362522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.802 [2024-07-21 08:33:11.362746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.802 [2024-07-21 08:33:11.363003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.802 [2024-07-21 08:33:11.363024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.802 [2024-07-21 08:33:11.363038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.803 [2024-07-21 08:33:11.366229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.803 [2024-07-21 08:33:11.375482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.803 [2024-07-21 08:33:11.375854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.803 [2024-07-21 08:33:11.375882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.803 [2024-07-21 08:33:11.375899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.803 [2024-07-21 08:33:11.376128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.803 [2024-07-21 08:33:11.376341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.803 [2024-07-21 08:33:11.376362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.803 [2024-07-21 08:33:11.376376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.803 [2024-07-21 08:33:11.379608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.803 08:33:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:01.803 08:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:01.803 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.803 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.803 [2024-07-21 08:33:11.387909] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:01.803 [2024-07-21 08:33:11.389054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.803 [2024-07-21 08:33:11.389373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.803 [2024-07-21 08:33:11.389402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.803 [2024-07-21 08:33:11.389419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.803 [2024-07-21 08:33:11.389665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.803 [2024-07-21 08:33:11.389884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.803 [2024-07-21 08:33:11.389930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.803 [2024-07-21 08:33:11.389945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.803 [2024-07-21 08:33:11.393099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.803 [2024-07-21 08:33:11.402570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.803 [2024-07-21 08:33:11.402958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.803 [2024-07-21 08:33:11.402987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.803 [2024-07-21 08:33:11.403003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.803 [2024-07-21 08:33:11.403246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.803 [2024-07-21 08:33:11.403445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.803 [2024-07-21 08:33:11.403466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.803 [2024-07-21 08:33:11.403479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.803 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.803 08:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:01.803 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.803 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.803 [2024-07-21 08:33:11.407069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.803 [2024-07-21 08:33:11.416296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:01.803 [2024-07-21 08:33:11.416829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.803 [2024-07-21 08:33:11.416867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:01.803 [2024-07-21 08:33:11.416887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:01.803 [2024-07-21 08:33:11.417140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:01.803 [2024-07-21 08:33:11.417348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:01.803 [2024-07-21 08:33:11.417370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:01.803 [2024-07-21 08:33:11.417386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:01.803 [2024-07-21 08:33:11.420653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:01.803 Malloc0 00:37:02.063 [2024-07-21 08:33:11.429987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.063 [2024-07-21 08:33:11.430483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-07-21 08:33:11.430528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-07-21 08:33:11.430558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:02.063 [2024-07-21 08:33:11.430861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.063 [2024-07-21 08:33:11.431178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:02.063 [2024-07-21 08:33:11.431213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:02.063 [2024-07-21 08:33:11.431255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:02.063 [2024-07-21 08:33:11.434551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.063 [2024-07-21 08:33:11.443455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:02.063 [2024-07-21 08:33:11.443846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-07-21 08:33:11.443878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0ff70 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-07-21 08:33:11.443896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0ff70 is same with the state(5) to be set 00:37:02.063 [2024-07-21 08:33:11.444125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0ff70 (9): Bad file descriptor 00:37:02.063 [2024-07-21 08:33:11.444346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:02.063 [2024-07-21 08:33:11.444368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:02.063 [2024-07-21 08:33:11.444382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.063 [2024-07-21 08:33:11.447912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:02.063 [2024-07-21 08:33:11.449575] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.063 08:33:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 80148 00:37:02.063 [2024-07-21 08:33:11.457149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:02.063 [2024-07-21 08:33:11.617580] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:37:12.044 00:37:12.044 Latency(us) 00:37:12.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:12.044 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:12.044 Verification LBA range: start 0x0 length 0x4000 00:37:12.044 Nvme1n1 : 15.00 6352.25 24.81 9299.93 0.00 8153.36 813.13 22039.51 00:37:12.044 =================================================================================================================== 00:37:12.044 Total : 6352.25 24.81 9299.93 0.00 8153.36 813.13 22039.51 00:37:12.044 08:33:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:12.044 08:33:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:12.045 rmmod nvme_tcp 00:37:12.045 rmmod nvme_fabrics 00:37:12.045 rmmod nvme_keyring 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 81395 ']' 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 81395 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 81395 ']' 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 81395 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81395 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81395' 00:37:12.045 killing process with pid 81395 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 81395 00:37:12.045 08:33:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 81395 00:37:12.045 08:33:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:12.045 08:33:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:12.045 08:33:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:12.045 08:33:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:12.045 08:33:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:12.045 08:33:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.045 08:33:21 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:12.045 08:33:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:13.949 08:33:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:13.949 00:37:13.949 real 0m22.474s 00:37:13.949 user 0m59.575s 00:37:13.949 sys 0m4.583s 00:37:13.949 08:33:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:13.949 08:33:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.949 ************************************ 00:37:13.949 END TEST nvmf_bdevperf 00:37:13.949 ************************************ 00:37:13.949 08:33:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:13.949 08:33:23 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:13.949 08:33:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:13.949 08:33:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:13.949 08:33:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:13.949 ************************************ 00:37:13.949 START TEST nvmf_target_disconnect 00:37:13.949 ************************************ 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:13.949 * Looking for test storage... 
00:37:13.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:13.949 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:13.950 08:33:23 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:37:13.950 08:33:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:15.852 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:15.852 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:15.852 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:15.853 08:33:25 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:15.853 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:15.853 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:15.853 08:33:25 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:15.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:15.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:37:15.853 00:37:15.853 --- 10.0.0.2 ping statistics --- 00:37:15.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.853 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:15.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:15.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:37:15.853 00:37:15.853 --- 10.0.0.1 ping statistics --- 00:37:15.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.853 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:15.853 08:33:25 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:15.853 ************************************ 00:37:15.853 START TEST nvmf_target_disconnect_tc1 00:37:15.853 ************************************ 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.853 08:33:25 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:15.853 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:15.854 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:15.854 EAL: No free 2048 kB hugepages reported on node 1 00:37:15.854 [2024-07-21 08:33:25.468959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.854 [2024-07-21 08:33:25.469032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197d590 with addr=10.0.0.2, port=4420 00:37:15.854 [2024-07-21 08:33:25.469064] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:15.854 [2024-07-21 08:33:25.469087] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:15.854 [2024-07-21 08:33:25.469101] nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:37:15.854 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:15.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:15.854 Initializing NVMe Controllers 00:37:15.854 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:37:15.854 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:15.854 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:15.854 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:15.854 00:37:15.854 real 0m0.097s 00:37:15.854 user 0m0.034s 00:37:15.854 sys 0m0.059s 00:37:15.854 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:15.854 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:15.854 ************************************ 00:37:15.854 END TEST nvmf_target_disconnect_tc1 00:37:15.854 ************************************ 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:16.111 ************************************ 00:37:16.111 START TEST nvmf_target_disconnect_tc2 00:37:16.111 
************************************ 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=84470 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 84470 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 84470 ']' 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:16.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:16.111 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:16.111 [2024-07-21 08:33:25.584761] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:37:16.111 [2024-07-21 08:33:25.584855] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:16.111 EAL: No free 2048 kB hugepages reported on node 1 00:37:16.111 [2024-07-21 08:33:25.647334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:16.111 [2024-07-21 08:33:25.734005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:16.111 [2024-07-21 08:33:25.734059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:16.111 [2024-07-21 08:33:25.734082] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:16.111 [2024-07-21 08:33:25.734093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:16.111 [2024-07-21 08:33:25.734103] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:16.111 [2024-07-21 08:33:25.734245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:37:16.111 [2024-07-21 08:33:25.734310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:37:16.111 [2024-07-21 08:33:25.734376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:37:16.111 [2024-07-21 08:33:25.734378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:37:16.368 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:16.368 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:16.369 Malloc0 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:16.369 [2024-07-21 08:33:25.902390] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable
00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:16.369 [2024-07-21 08:33:25.930661] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=84608
00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:37:16.369 08:33:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:37:16.369 EAL: No free 2048 kB hugepages reported on node 1
00:37:18.923 08:33:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 84470
00:37:18.923 08:33:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 [2024-07-21 08:33:27.955233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Write completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 [2024-07-21 08:33:27.955566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.923 starting I/O failed
00:37:18.923 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Write completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Write completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Write completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Write completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Write completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Read completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Write completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Write completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 Write completed with error (sct=0, sc=8)
00:37:18.924 starting I/O failed
00:37:18.924 [2024-07-21 08:33:27.955867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:18.924 [2024-07-21 08:33:27.956121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.956153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.956272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.956299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.956466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.956492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.956629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.956657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.956792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.956820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.956963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.956989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.957117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.957143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.957273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.957300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.957430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.957456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.957592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.957636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.957781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.957814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.957963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.958002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.958138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.958166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.958291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.958318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.958418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.958445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.958549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.958575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.958697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.958726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.958829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.958856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.958990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.959016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.959143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.959169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.959279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.959307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.959422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.959449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.959579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.959626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.959767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.959794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.959894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.959926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.960124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.960166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.960295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.960322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.960454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.960480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.960619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.960645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.960754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.960781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.960907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.960933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.924 [2024-07-21 08:33:27.961031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.924 [2024-07-21 08:33:27.961059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.924 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.961173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.961198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.961334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.961375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.961482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.961523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.961725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.961752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.961854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.961880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.962038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.962065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.962207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.962237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.962375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.962403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.962547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.962575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.962710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.962737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.962833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.962860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.963020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.963047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.963272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.963302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.963482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.963509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.963654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.963682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.963809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.963837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.964029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.964056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.964157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.964185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.964345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.964372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.964503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.964530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.964726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.964754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.964843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.964870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.964999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.965025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.965124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.965150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.965248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.965276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Write completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Write completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Write completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Write completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read 
completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Write completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Write completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Read completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Write completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 Write completed with error (sct=0, sc=8) 00:37:18.925 starting I/O failed 00:37:18.925 [2024-07-21 08:33:27.965670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:18.925 [2024-07-21 08:33:27.965797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.925 [2024-07-21 08:33:27.965836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.925 qpair failed and we were unable to recover it. 00:37:18.925 [2024-07-21 08:33:27.965948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.925 [2024-07-21 08:33:27.965977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.925 qpair failed and we were unable to recover it. 00:37:18.925 [2024-07-21 08:33:27.966216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.925 [2024-07-21 08:33:27.966242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.925 qpair failed and we were unable to recover it. 
00:37:18.925 [2024-07-21 08:33:27.966345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.966372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.925 [2024-07-21 08:33:27.966485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.925 [2024-07-21 08:33:27.966525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.925 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.966662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.966690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.966799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.966825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.966924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.966951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.967086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.967112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.967245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.967271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.967392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.967422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.967537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.967563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.967736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.967775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.967894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.967923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.968024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.968050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.968147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.968173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.968272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.968313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.968470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.968528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.968641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.968670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.968781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.968808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.968927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.968954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.969052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.969079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.969210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.969236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.969362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.969388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.969495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.969523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.969660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.969692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.969795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.969822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.969952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.969978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.970107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.970133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.970237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.970263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.970413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.970440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.970554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.970594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.970736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.970765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.970891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.970918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.971075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.971102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.971245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.971272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.971366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.971393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.971552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.971579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.971723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.971763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.971902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.971932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.972033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.972060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.972289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.972340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.972438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.972464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.972573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.972601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.972719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.972746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.926 qpair failed and we were unable to recover it.
00:37:18.926 [2024-07-21 08:33:27.972885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.926 [2024-07-21 08:33:27.972914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.973022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.973049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.973174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.973199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.973353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.973379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.973472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.973497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.973656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.973682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.973780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.973806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.973930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.973964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.974073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.974100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.974204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.974230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.974330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.974358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.974487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.974513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.974610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.974652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.974788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.974814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.974910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.974936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.975066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.975092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.975277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.975305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.975433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.975459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.975558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.975583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.975695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.975722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.975851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.975877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.976012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.976039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.976137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.976163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.976255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.976280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.976407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.976433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.976562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.976588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.976725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.976752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.976878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.976904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.977053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.977079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.977212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.977237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.977393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.977418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.977577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.977604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.977711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.977738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.927 qpair failed and we were unable to recover it.
00:37:18.927 [2024-07-21 08:33:27.977841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.927 [2024-07-21 08:33:27.977867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.977992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.978019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.978150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.978175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.978299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.978326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.978448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.978477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.978624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.978667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.978798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.978825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.978980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.979007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.979160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.979185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.979338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.979369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.979529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.979569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.979706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.979734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.979886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.979912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.980013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.980040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.980139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.980171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.980293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.980320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.980473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.980500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.980596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.980627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.980731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.980758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.980886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.928 [2024-07-21 08:33:27.980912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.928 qpair failed and we were unable to recover it.
00:37:18.928 [2024-07-21 08:33:27.981045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.981071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.981179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.981206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.981374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.981403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.981537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.981564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.981711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.981740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 
00:37:18.928 [2024-07-21 08:33:27.981887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.981913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.982044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.982071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.982195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.982222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.982386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.982413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.982542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.982568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 
00:37:18.928 [2024-07-21 08:33:27.982723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.982749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.982849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.982877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.983020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.983063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.983163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.983188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.983344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.983371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 
00:37:18.928 [2024-07-21 08:33:27.983525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.983567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.983674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.983702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.983810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.983837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.983970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.983996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.984145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.984175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 
00:37:18.928 [2024-07-21 08:33:27.984294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.928 [2024-07-21 08:33:27.984341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.928 qpair failed and we were unable to recover it. 00:37:18.928 [2024-07-21 08:33:27.984494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.984521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.984648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.984674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.984826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.984853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.984988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.985014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 
00:37:18.929 [2024-07-21 08:33:27.985138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.985165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.985260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.985287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.985411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.985437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.985590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.985622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.985738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.985784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 
00:37:18.929 [2024-07-21 08:33:27.985923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.985953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.986095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.986122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.986254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.986281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.986435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.986462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.986578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.986609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 
00:37:18.929 [2024-07-21 08:33:27.986768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.986798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.986932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.986974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.987105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.987131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.987259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.987286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.987456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.987483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 
00:37:18.929 [2024-07-21 08:33:27.987579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.987604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.987711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.987737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.987864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.987890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.987988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.988015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.988128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.988154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 
00:37:18.929 [2024-07-21 08:33:27.988262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.988289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.988421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.988447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.988570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.988597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.988765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.988791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.988915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.988959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 
00:37:18.929 [2024-07-21 08:33:27.989111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.989139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.989245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.989272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.989400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.989427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.989554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.989580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 00:37:18.929 [2024-07-21 08:33:27.989698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.989724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.929 qpair failed and we were unable to recover it. 
00:37:18.929 [2024-07-21 08:33:27.989825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.929 [2024-07-21 08:33:27.989851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.989956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.989982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.990138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.990164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.990290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.990318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.990412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.990439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 
00:37:18.930 [2024-07-21 08:33:27.990552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.990580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.990740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.990774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.990937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.990981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.991110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.991146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.991271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.991298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 
00:37:18.930 [2024-07-21 08:33:27.991410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.991447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.991556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.991584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.991769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.991797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.991959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.991986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.992109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.992135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 
00:37:18.930 [2024-07-21 08:33:27.992233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.992260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.992390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.992417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.992565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.992622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.992805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.992836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.992955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.992992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 
00:37:18.930 [2024-07-21 08:33:27.993135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.993167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.993326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.993355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.993493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.993520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.993624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.993650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.993750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.993777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 
00:37:18.930 [2024-07-21 08:33:27.993904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.993929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.994080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.994126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.994255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.994282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.994441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.994468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.994593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.994631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 
00:37:18.930 [2024-07-21 08:33:27.994758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.994785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.994904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.994931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.995039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.995067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.995201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.995229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.995390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.995417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 
00:37:18.930 [2024-07-21 08:33:27.995544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.995571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.995724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.995752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.995847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.995873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.996043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.996070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.930 [2024-07-21 08:33:27.996192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.996219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 
00:37:18.930 [2024-07-21 08:33:27.996361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.930 [2024-07-21 08:33:27.996398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.930 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.996545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.996584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.996736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.996781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.996903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.996932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.997056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.997083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 
00:37:18.931 [2024-07-21 08:33:27.997238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.997263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.997368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.997396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.997571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.997627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.997789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.997820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.998002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.998028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 
00:37:18.931 [2024-07-21 08:33:27.998130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.998157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.998289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.998315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.998508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.998537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.998649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.998676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.998810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.998837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 
00:37:18.931 [2024-07-21 08:33:27.998976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.999002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.999157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.999184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.999318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.999344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.999447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.999480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.999621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.999653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 
00:37:18.931 [2024-07-21 08:33:27.999753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.999780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:27.999914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:27.999941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:28.000123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.000153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:28.000313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.000340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:28.000447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.000472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 
00:37:18.931 [2024-07-21 08:33:28.000577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.000620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:28.000751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.000778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:28.000921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.000948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:28.001052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.001079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:28.001230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.001265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 
00:37:18.931 [2024-07-21 08:33:28.001395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.001422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:28.001598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.001636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:28.001765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.001791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:28.001918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.001944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.931 [2024-07-21 08:33:28.002075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.002101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 
00:37:18.931 [2024-07-21 08:33:28.002240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.931 [2024-07-21 08:33:28.002266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.931 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.002395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.002422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.002531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.002569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.002698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.002728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.002828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.002855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 
00:37:18.932 [2024-07-21 08:33:28.002967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.003010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.003157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.003184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.003338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.003365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.003469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.003495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.003645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.003686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 
00:37:18.932 [2024-07-21 08:33:28.003827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.003854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.004011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.004039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.004214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.004258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.004377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.004426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.004579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.004631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 
00:37:18.932 [2024-07-21 08:33:28.004747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.004774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.004920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.004965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.005137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.005185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.005361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.005406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.005540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.005567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 
00:37:18.932 [2024-07-21 08:33:28.005722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.005768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.005911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.005941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.006083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.006110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.006206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.006234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.006385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.006417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 
00:37:18.932 [2024-07-21 08:33:28.006538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.006566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.006728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.006773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.006922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.006952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.007118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.007169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.007301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.007328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 
00:37:18.932 [2024-07-21 08:33:28.007470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.007508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.007649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.007678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.007777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.007804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.007907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.007933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 00:37:18.932 [2024-07-21 08:33:28.008087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.932 [2024-07-21 08:33:28.008138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.932 qpair failed and we were unable to recover it. 
00:37:18.933 [2024-07-21 08:33:28.008298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.008327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.008470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.008498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.008633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.008660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.008815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.008841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.008982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.009008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 
00:37:18.933 [2024-07-21 08:33:28.009276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.009329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.009445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.009475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.009642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.009669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.009770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.009796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.009904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.009955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 
00:37:18.933 [2024-07-21 08:33:28.010148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.010174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.010301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.010335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.010435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.010462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.010587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.010624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 00:37:18.933 [2024-07-21 08:33:28.010754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.933 [2024-07-21 08:33:28.010780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.933 qpair failed and we were unable to recover it. 
00:37:18.933 [2024-07-21 08:33:28.010872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.933 [2024-07-21 08:33:28.010926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:18.933 qpair failed and we were unable to recover it.
00:37:18.933 [2024-07-21 08:33:28.012437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.933 [2024-07-21 08:33:28.012477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.933 qpair failed and we were unable to recover it.
00:37:18.934 [2024-07-21 08:33:28.020029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.934 [2024-07-21 08:33:28.020070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.934 qpair failed and we were unable to recover it.
00:37:18.936 [the identical connect() failed / sock connection error / qpair failed sequence repeats for tqpair=0x7fd7e4000b90, tqpair=0x7fd7dc000b90, and tqpair=0x64d560, all with addr=10.0.0.2, port=4420, errno = 111, through 2024-07-21 08:33:28.030644]
00:37:18.936 [2024-07-21 08:33:28.030752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.030779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.030963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.031016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.031154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.031179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.031273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.031301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.031411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.031437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 
00:37:18.936 [2024-07-21 08:33:28.031587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.031651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.031778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.031809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.031935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.031962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.032063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.032089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.032246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.032283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 
00:37:18.936 [2024-07-21 08:33:28.032388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.032423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.032557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.032584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.032740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.032770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.032910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.032951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.033096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.033123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 
00:37:18.936 [2024-07-21 08:33:28.033286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.936 [2024-07-21 08:33:28.033313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.936 qpair failed and we were unable to recover it. 00:37:18.936 [2024-07-21 08:33:28.033446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.033472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.033628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.033657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.033809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.033839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.033963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.033991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 
00:37:18.937 [2024-07-21 08:33:28.034125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.034153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.034283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.034311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.034465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.034491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.034597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.034635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.034784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.034830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 
00:37:18.937 [2024-07-21 08:33:28.035016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.035061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.035163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.035189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.035297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.035325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.035431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.035462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.035592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.035632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 
00:37:18.937 [2024-07-21 08:33:28.035736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.035762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.035888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.035922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.036082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.036109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.036217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.036245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.036342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.036369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 
00:37:18.937 [2024-07-21 08:33:28.036496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.036522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.036680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.036706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.036833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.036859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.037011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.037038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.037171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.037198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 
00:37:18.937 [2024-07-21 08:33:28.037970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.038001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.038135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.038163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.039077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.039107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.039252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.039279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.039998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.040028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 
00:37:18.937 [2024-07-21 08:33:28.040174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.040202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.041152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.041188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.041352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.041379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.041484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.041512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.041677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.041704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 
00:37:18.937 [2024-07-21 08:33:28.041831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.041858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.041970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.041997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.042158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.042185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.042280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.042307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 00:37:18.937 [2024-07-21 08:33:28.042437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.937 [2024-07-21 08:33:28.042464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.937 qpair failed and we were unable to recover it. 
00:37:18.937 [2024-07-21 08:33:28.042610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.042644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.042746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.042773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.042872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.042909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.042998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.043024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.043148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.043175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 
00:37:18.938 [2024-07-21 08:33:28.043288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.043314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.043440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.043467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.043583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.043629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.043792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.043836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.043972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.044004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 
00:37:18.938 [2024-07-21 08:33:28.044139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.044172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.044302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.044329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.044464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.044492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.044604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.044639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.044797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.044826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 
00:37:18.938 [2024-07-21 08:33:28.044950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.044987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.045105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.045134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.045243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.045272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.045434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.045464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 00:37:18.938 [2024-07-21 08:33:28.045588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.938 [2024-07-21 08:33:28.045620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.938 qpair failed and we were unable to recover it. 
00:37:18.938 [2024-07-21 08:33:28.045753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.938 [2024-07-21 08:33:28.045779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.938 qpair failed and we were unable to recover it.
[... the same record pair — posix.c:1038:posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error" and "qpair failed and we were unable to recover it." — repeats continuously from 08:33:28.045753 through 08:33:28.066382, alternating between tqpair=0x64d560 and tqpair=0x7fd7dc000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:37:18.941 [2024-07-21 08:33:28.066542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.066567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.066699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.066726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.066826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.066852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.067057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.067085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.067299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.067332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 
00:37:18.941 [2024-07-21 08:33:28.067542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.067571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.067756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.067782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.067964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.067990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.068110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.068135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.068311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.068340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 
00:37:18.941 [2024-07-21 08:33:28.068524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.068553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.068678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.068704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.068794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.068820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.068954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.068980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.069101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.069129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 
00:37:18.941 [2024-07-21 08:33:28.069269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.069298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.069468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.069497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.069639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.069682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.069781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.069807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.069930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.069955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 
00:37:18.941 [2024-07-21 08:33:28.070103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.070132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.070340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.070369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.070514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.070543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.941 [2024-07-21 08:33:28.070677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.941 [2024-07-21 08:33:28.070703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.941 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.070832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.070858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 
00:37:18.942 [2024-07-21 08:33:28.070995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.071020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.071177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.071205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.071372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.071401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.071545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.071573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.071720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.071746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 
00:37:18.942 [2024-07-21 08:33:28.071864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.071889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.072074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.072107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.072296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.072324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.072554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.072582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.072744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.072770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 
00:37:18.942 [2024-07-21 08:33:28.072891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.072919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.073096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.073125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.073287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.073335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.073456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.073484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.073606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.073650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 
00:37:18.942 [2024-07-21 08:33:28.073756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.073782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.073909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.073934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.074093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.074122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.074289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.074319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.074455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.074492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 
00:37:18.942 [2024-07-21 08:33:28.074654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.074693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.074813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.074840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.075018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.075061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.075184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.075227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.075365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.075392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 
00:37:18.942 [2024-07-21 08:33:28.075548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.075574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.075679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.075705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.075834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.075861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.076059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.076106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.076286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.076313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 
00:37:18.942 [2024-07-21 08:33:28.076477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.076504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.076652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.076679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.076833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.076859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.077012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.077071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.077199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.077225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 
00:37:18.942 [2024-07-21 08:33:28.077355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.077382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.077517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.077544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.077674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.077704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.077871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.077916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 00:37:18.942 [2024-07-21 08:33:28.078101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.942 [2024-07-21 08:33:28.078143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.942 qpair failed and we were unable to recover it. 
00:37:18.943 [2024-07-21 08:33:28.078268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.943 [2024-07-21 08:33:28.078294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:18.943 qpair failed and we were unable to recover it. 00:37:18.943 [2024-07-21 08:33:28.078419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.943 [2024-07-21 08:33:28.078446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.943 qpair failed and we were unable to recover it. 00:37:18.943 [2024-07-21 08:33:28.078565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.943 [2024-07-21 08:33:28.078591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.943 qpair failed and we were unable to recover it. 00:37:18.943 [2024-07-21 08:33:28.078780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.943 [2024-07-21 08:33:28.078810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.943 qpair failed and we were unable to recover it. 00:37:18.943 [2024-07-21 08:33:28.078928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.943 [2024-07-21 08:33:28.078956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.943 qpair failed and we were unable to recover it. 
00:37:18.943 [2024-07-21 08:33:28.079093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.943 [2024-07-21 08:33:28.079122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.943 qpair failed and we were unable to recover it. 00:37:18.943 [2024-07-21 08:33:28.079242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.943 [2024-07-21 08:33:28.079270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.943 qpair failed and we were unable to recover it. 00:37:18.943 [2024-07-21 08:33:28.079397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.943 [2024-07-21 08:33:28.079424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.943 qpair failed and we were unable to recover it. 00:37:18.943 [2024-07-21 08:33:28.079581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.943 [2024-07-21 08:33:28.079607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.943 qpair failed and we were unable to recover it. 00:37:18.943 [2024-07-21 08:33:28.079725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.943 [2024-07-21 08:33:28.079753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.943 qpair failed and we were unable to recover it. 
00:37:18.943 [2024-07-21 08:33:28.079935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.943 [2024-07-21 08:33:28.079965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.943 qpair failed and we were unable to recover it.
[last three messages repeated for every subsequent reconnect attempt from 08:33:28.080 through 08:33:28.099 (~110 further attempts); each attempt against tqpair=0x64d560 at 10.0.0.2, port=4420 failed with errno = 111 and the qpair could not be recovered]
00:37:18.946 [2024-07-21 08:33:28.099416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.099446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.099597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.099634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.099745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.099771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.099893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.099922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.100080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.100114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 
00:37:18.946 [2024-07-21 08:33:28.100270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.100300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.100469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.100495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.100668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.100697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.100832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.100862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.101009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.101035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 
00:37:18.946 [2024-07-21 08:33:28.101163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.101189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.101299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.101326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.101490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.101516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.101674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.101701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.101806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.101832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 
00:37:18.946 [2024-07-21 08:33:28.101950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.101976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.102072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.102098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.102232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.102258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.102462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.102488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.102638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.102665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 
00:37:18.946 [2024-07-21 08:33:28.102773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.102799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.102896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.102922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.103052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.103078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.103211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.103247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.103339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.103365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 
00:37:18.946 [2024-07-21 08:33:28.103490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.103516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.103679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.103705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.103835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.103861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.103965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.103991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.104086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.104112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 
00:37:18.946 [2024-07-21 08:33:28.104237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.104262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.104420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.104445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.104620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.104649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.104871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.946 [2024-07-21 08:33:28.104900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.946 qpair failed and we were unable to recover it. 00:37:18.946 [2024-07-21 08:33:28.105136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.105186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 
00:37:18.947 [2024-07-21 08:33:28.105327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.105353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.105457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.105483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.105645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.105690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.105859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.105887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.106036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.106098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 
00:37:18.947 [2024-07-21 08:33:28.106280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.106306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.106434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.106460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.106553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.106578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.106692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.106735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.106873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.106901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 
00:37:18.947 [2024-07-21 08:33:28.107095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.107128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.107266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.107292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.107426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.107453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.107584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.107609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.107748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.107774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 
00:37:18.947 [2024-07-21 08:33:28.107905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.107930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.108086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.108111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.108266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.108292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.108416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.108449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.108590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.108622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 
00:37:18.947 [2024-07-21 08:33:28.108750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.108780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.108949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.108977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.109123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.109149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.109246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.109272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.109373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.109398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 
00:37:18.947 [2024-07-21 08:33:28.109525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.109551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.109706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.109735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.109850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.109893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.110057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.110086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.110258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.110283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 
00:37:18.947 [2024-07-21 08:33:28.110409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.110436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.110593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.110634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.110787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.110815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.110982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.111010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.111176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.111205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 
00:37:18.947 [2024-07-21 08:33:28.111360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.111390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.111541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.111566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.111739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.111773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.111919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.111948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 00:37:18.947 [2024-07-21 08:33:28.112108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.947 [2024-07-21 08:33:28.112137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.947 qpair failed and we were unable to recover it. 
00:37:18.947 [2024-07-21 08:33:28.112298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.947 [2024-07-21 08:33:28.112326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:18.947 qpair failed and we were unable to recover it.
00:37:18.947 [... the three-line error above repeats ~115 times between 08:33:28.112298 and 08:33:28.130744, alternating tqpair=0x64d560, 0x7fd7e4000b90, and 0x7fd7d4000b90; every connect() attempt to 10.0.0.2 port 4420 returned errno 111 (ECONNREFUSED) and no qpair was recovered ...]
00:37:18.950 [2024-07-21 08:33:28.130914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.950 [2024-07-21 08:33:28.130943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.950 qpair failed and we were unable to recover it. 00:37:18.950 [2024-07-21 08:33:28.131207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.950 [2024-07-21 08:33:28.131260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.950 qpair failed and we were unable to recover it. 00:37:18.950 [2024-07-21 08:33:28.131516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.950 [2024-07-21 08:33:28.131567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.131749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.131779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.131984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.132010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 
00:37:18.951 [2024-07-21 08:33:28.132161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.132186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.132324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.132349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.132512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.132541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.132664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.132691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.132801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.132827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 
00:37:18.951 [2024-07-21 08:33:28.132928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.132955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.133051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.133076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.133232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.133258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.133382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.133409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.133514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.133541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 
00:37:18.951 [2024-07-21 08:33:28.133682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.133709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.133827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.133857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.133986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.134015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.134191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.134217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.134347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.134374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 
00:37:18.951 [2024-07-21 08:33:28.134517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.134546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.134692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.134717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.134859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.134885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.135009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.135033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.135157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.135182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 
00:37:18.951 [2024-07-21 08:33:28.135314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.135342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.135470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.135497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.135630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.135657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.135754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.135780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.135875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.135901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 
00:37:18.951 [2024-07-21 08:33:28.136002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.136029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.136125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.136153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.136283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.136309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.136415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.136440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.136565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.136590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 
00:37:18.951 [2024-07-21 08:33:28.136724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.136749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.136877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.136903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.137010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.137035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.137162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.137188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.137311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.137336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 
00:37:18.951 [2024-07-21 08:33:28.137428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.137472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.137610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.137662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.137805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.951 [2024-07-21 08:33:28.137832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:18.951 qpair failed and we were unable to recover it. 00:37:18.951 [2024-07-21 08:33:28.137982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.138010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.138218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.138244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 
00:37:18.952 [2024-07-21 08:33:28.138402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.138428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.138529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.138555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.138708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.138735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.138832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.138860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.138994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.139021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 
00:37:18.952 [2024-07-21 08:33:28.139175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.139201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.139333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.139359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.139513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.139539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.139666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.139694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.139808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.139834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 
00:37:18.952 [2024-07-21 08:33:28.139961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.139986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.140087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.140116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.140241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.140266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.140397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.140423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.140583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.140610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 
00:37:18.952 [2024-07-21 08:33:28.140761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.140787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.140944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.140970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.141073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.141098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.141204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.141230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.141384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.141410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 
00:37:18.952 [2024-07-21 08:33:28.141535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.141561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.141692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.141718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.141812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.141838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.141970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.141996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.142128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.142155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 
00:37:18.952 [2024-07-21 08:33:28.142297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.142323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.142481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.142507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.142668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.142695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.142820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.142847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.142957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.142985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 
00:37:18.952 [2024-07-21 08:33:28.143116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.143143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.143244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.143271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.143478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.143504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.143602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.143635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 00:37:18.952 [2024-07-21 08:33:28.143788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.952 [2024-07-21 08:33:28.143814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.952 qpair failed and we were unable to recover it. 
00:37:18.955 [2024-07-21 08:33:28.162347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.955 [2024-07-21 08:33:28.162374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.955 qpair failed and we were unable to recover it. 00:37:18.955 [2024-07-21 08:33:28.162500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.955 [2024-07-21 08:33:28.162526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.955 qpair failed and we were unable to recover it. 00:37:18.955 [2024-07-21 08:33:28.162632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.955 [2024-07-21 08:33:28.162659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.955 qpair failed and we were unable to recover it. 00:37:18.955 [2024-07-21 08:33:28.162761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.955 [2024-07-21 08:33:28.162787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.955 qpair failed and we were unable to recover it. 00:37:18.955 [2024-07-21 08:33:28.162915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.955 [2024-07-21 08:33:28.162941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.955 qpair failed and we were unable to recover it. 
00:37:18.955 [2024-07-21 08:33:28.163070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.955 [2024-07-21 08:33:28.163097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.955 qpair failed and we were unable to recover it. 00:37:18.955 [2024-07-21 08:33:28.163220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.955 [2024-07-21 08:33:28.163247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.955 qpair failed and we were unable to recover it. 00:37:18.955 [2024-07-21 08:33:28.163373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.955 [2024-07-21 08:33:28.163399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.955 qpair failed and we were unable to recover it. 00:37:18.955 [2024-07-21 08:33:28.163525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.955 [2024-07-21 08:33:28.163552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.955 qpair failed and we were unable to recover it. 00:37:18.955 [2024-07-21 08:33:28.163709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.955 [2024-07-21 08:33:28.163736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.955 qpair failed and we were unable to recover it. 
00:37:18.955 [2024-07-21 08:33:28.163886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.163912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.164010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.164036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.164164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.164189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.164331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.164360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.164518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.164544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 
00:37:18.956 [2024-07-21 08:33:28.164642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.164669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.164797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.164823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.164989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.165015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.165170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.165196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.165319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.165345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 
00:37:18.956 [2024-07-21 08:33:28.165475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.165503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.165634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.165661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.165791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.165817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.165951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.165978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.166105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.166131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 
00:37:18.956 [2024-07-21 08:33:28.166283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.166310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.166441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.166467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.166601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.166659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.166790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.166817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.166973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.166999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 
00:37:18.956 [2024-07-21 08:33:28.167120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.167146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.167300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.167326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.167451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.167477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.167598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.167629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.167734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.167761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 
00:37:18.956 [2024-07-21 08:33:28.167918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.167944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.168045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.168071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.168198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.168224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.168324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.168350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.168502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.168528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 
00:37:18.956 [2024-07-21 08:33:28.168631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.168657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.168778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.168804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.168927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.168953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.169107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.169133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.169231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.169258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 
00:37:18.956 [2024-07-21 08:33:28.169383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.169410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.169515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.169542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.169700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.169726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.169833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.169860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.956 [2024-07-21 08:33:28.170003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.170029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 
00:37:18.956 [2024-07-21 08:33:28.170161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.956 [2024-07-21 08:33:28.170187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.956 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.170286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.170312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.170410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.170436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.170567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.170597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.170727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.170754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 
00:37:18.957 [2024-07-21 08:33:28.170878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.170904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.171009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.171036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.171163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.171189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.171340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.171366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.171483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.171513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 
00:37:18.957 [2024-07-21 08:33:28.171656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.171683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.171784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.171810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.171966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.171993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.172120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.172146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.172298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.172324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 
00:37:18.957 [2024-07-21 08:33:28.172452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.172478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.172604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.172635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.172767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.172793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.172926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.172953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.173046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.173072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 
00:37:18.957 [2024-07-21 08:33:28.173198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.173225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.173349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.173375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.173526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.173553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.173683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.173710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 00:37:18.957 [2024-07-21 08:33:28.173842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.957 [2024-07-21 08:33:28.173868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.957 qpair failed and we were unable to recover it. 
00:37:18.957 [2024-07-21 08:33:28.174022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.957 [2024-07-21 08:33:28.174048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.957 qpair failed and we were unable to recover it.
[the same three-line error sequence (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock failure, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 08:33:28.174181 through 08:33:28.191952, always for tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420]
00:37:18.960 [2024-07-21 08:33:28.192102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.192128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.192231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.192258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.192427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.192456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.192596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.192626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.192751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.192777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 
00:37:18.960 [2024-07-21 08:33:28.192874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.192901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.192997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.193023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.193145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.193171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.193298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.193324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.193446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.193472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 
00:37:18.960 [2024-07-21 08:33:28.193608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.193641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.193767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.193793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.193918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.193945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.194105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.194131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.194259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.194285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 
00:37:18.960 [2024-07-21 08:33:28.194435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.194461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.194586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.960 [2024-07-21 08:33:28.194617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.960 qpair failed and we were unable to recover it. 00:37:18.960 [2024-07-21 08:33:28.194751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.194777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.194877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.194903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.195035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.195062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 
00:37:18.961 [2024-07-21 08:33:28.195155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.195181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.195329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.195355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.195486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.195513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.195633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.195663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.195791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.195818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 
00:37:18.961 [2024-07-21 08:33:28.195974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.196001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.196097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.196124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.196249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.196276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.196399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.196425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.196532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.196558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 
00:37:18.961 [2024-07-21 08:33:28.196716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.196743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.196869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.196895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.197020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.197046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.197172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.197198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.197324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.197350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 
00:37:18.961 [2024-07-21 08:33:28.197447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.197473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.197594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.197626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.197739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.197765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.197867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.197893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.198024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.198050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 
00:37:18.961 [2024-07-21 08:33:28.198204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.198230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.198331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.198357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.198488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.198514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.198624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.198652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.198776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.198802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 
00:37:18.961 [2024-07-21 08:33:28.198896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.198921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.199020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.199047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.199148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.199175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.199311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.199337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.199459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.199485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 
00:37:18.961 [2024-07-21 08:33:28.199621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.199648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.199770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.199796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.199950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.199976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.200105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.200130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.200236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.200263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 
00:37:18.961 [2024-07-21 08:33:28.200397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.200423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.200538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.200564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.200660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.961 [2024-07-21 08:33:28.200688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.961 qpair failed and we were unable to recover it. 00:37:18.961 [2024-07-21 08:33:28.200811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.200838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.200961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.200988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 
00:37:18.962 [2024-07-21 08:33:28.201114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.201142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.201246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.201273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.201427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.201453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.201550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.201581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.201715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.201742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 
00:37:18.962 [2024-07-21 08:33:28.201897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.201924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.202028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.202055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.202211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.202237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.202361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.202388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.202564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.202593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 
00:37:18.962 [2024-07-21 08:33:28.202744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.202771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.202901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.202928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.203022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.203048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.203177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.203204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 00:37:18.962 [2024-07-21 08:33:28.203335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.962 [2024-07-21 08:33:28.203362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.962 qpair failed and we were unable to recover it. 
00:37:18.962 [2024-07-21 08:33:28.203490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.962 [2024-07-21 08:33:28.203516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.962 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) and unrecoverable qpair errors for tqpair=0x7fd7d4000b90 (addr=10.0.0.2, port=4420) repeat through 2024-07-21 08:33:28.220920 ...]
00:37:18.965 [2024-07-21 08:33:28.221048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.221075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.221202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.221228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.221326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.221353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.221479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.221505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.221634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.221661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 
00:37:18.965 [2024-07-21 08:33:28.221790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.221816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.221918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.221945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.222079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.222106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.222231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.222258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.222412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.222438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 
00:37:18.965 [2024-07-21 08:33:28.222540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.222566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.222697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.222725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.222866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.222893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.222981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.223007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.223111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.223137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 
00:37:18.965 [2024-07-21 08:33:28.223241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.223267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.223395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.223421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.223573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.223599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.223715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.223742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.223869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.223895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 
00:37:18.965 [2024-07-21 08:33:28.224030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.224067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.224232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.224263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.224394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.224419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.224582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.224619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 00:37:18.965 [2024-07-21 08:33:28.224770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.224797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.965 qpair failed and we were unable to recover it. 
00:37:18.965 [2024-07-21 08:33:28.224902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.965 [2024-07-21 08:33:28.224927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.225024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.225051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.225213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.225239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.225342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.225386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.225526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.225552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 
00:37:18.966 [2024-07-21 08:33:28.225707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.225735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.225866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.225892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.225989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.226015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.226118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.226148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.226272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.226298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 
00:37:18.966 [2024-07-21 08:33:28.226431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.226459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.226623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.226651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.226779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.226804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.226926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.226952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.227087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.227112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 
00:37:18.966 [2024-07-21 08:33:28.227239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.227264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.227390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.227415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.227519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.227544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.227671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.227698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.227832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.227858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 
00:37:18.966 [2024-07-21 08:33:28.227960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.227987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.228140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.228166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.228302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.228330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.228440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.228467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.228592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.228627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 
00:37:18.966 [2024-07-21 08:33:28.228760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.228787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.228941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.228967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.229090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.229116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.229268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.229294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.229394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.229420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 
00:37:18.966 [2024-07-21 08:33:28.229514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.229541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.229662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.229689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.229822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.229848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.229978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.230005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.230120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.230146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 
00:37:18.966 [2024-07-21 08:33:28.230247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.230277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.966 [2024-07-21 08:33:28.230403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.966 [2024-07-21 08:33:28.230429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.966 qpair failed and we were unable to recover it. 00:37:18.967 [2024-07-21 08:33:28.230533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.230559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 00:37:18.967 [2024-07-21 08:33:28.230691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.230718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 00:37:18.967 [2024-07-21 08:33:28.230839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.230865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 
00:37:18.967 [2024-07-21 08:33:28.230970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.230996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 00:37:18.967 [2024-07-21 08:33:28.231119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.231145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 00:37:18.967 [2024-07-21 08:33:28.231242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.231267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 00:37:18.967 [2024-07-21 08:33:28.231362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.231388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 00:37:18.967 [2024-07-21 08:33:28.231519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.231545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 
00:37:18.967 [2024-07-21 08:33:28.231643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.231670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 00:37:18.967 [2024-07-21 08:33:28.231797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.231824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 00:37:18.967 [2024-07-21 08:33:28.231955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.231983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 00:37:18.967 [2024-07-21 08:33:28.232088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.232114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 00:37:18.967 [2024-07-21 08:33:28.232214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.967 [2024-07-21 08:33:28.232240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.967 qpair failed and we were unable to recover it. 
00:37:18.967 [2024-07-21 08:33:28.232338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.967 [2024-07-21 08:33:28.232365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.967 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt of tqpair=0x7fd7d4000b90 (addr=10.0.0.2, port=4420) between 08:33:28.232338 and 08:33:28.251465 ...]
00:37:18.970 [2024-07-21 08:33:28.251440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.970 [2024-07-21 08:33:28.251465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:18.970 qpair failed and we were unable to recover it.
00:37:18.970 [2024-07-21 08:33:28.251636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.970 [2024-07-21 08:33:28.251665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.970 qpair failed and we were unable to recover it. 00:37:18.970 [2024-07-21 08:33:28.251801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.970 [2024-07-21 08:33:28.251830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.970 qpair failed and we were unable to recover it. 00:37:18.970 [2024-07-21 08:33:28.251949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.970 [2024-07-21 08:33:28.251974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.970 qpair failed and we were unable to recover it. 00:37:18.970 [2024-07-21 08:33:28.252067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.970 [2024-07-21 08:33:28.252093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.970 qpair failed and we were unable to recover it. 00:37:18.970 [2024-07-21 08:33:28.252194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.970 [2024-07-21 08:33:28.252219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.970 qpair failed and we were unable to recover it. 
00:37:18.970 [2024-07-21 08:33:28.252344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.970 [2024-07-21 08:33:28.252370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.970 qpair failed and we were unable to recover it. 00:37:18.970 [2024-07-21 08:33:28.252495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.970 [2024-07-21 08:33:28.252538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.970 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.252690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.252719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.252869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.252895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.253020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.253047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 
00:37:18.971 [2024-07-21 08:33:28.253199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.253227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.253350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.253376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.253504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.253530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.253686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.253716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.253861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.253887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 
00:37:18.971 [2024-07-21 08:33:28.254013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.254039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.254221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.254250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.254402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.254428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.254563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.254588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.254789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.254814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 
00:37:18.971 [2024-07-21 08:33:28.254915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.254941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.255063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.255089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.255258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.255285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.255412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.255438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.255542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.255568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 
00:37:18.971 [2024-07-21 08:33:28.255743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.255770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.255871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.255897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.256016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.256043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.256355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.256385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.256525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.256554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 
00:37:18.971 [2024-07-21 08:33:28.256691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.256720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.256844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.256870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.256974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.256999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.257135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.257160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.257337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.257367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 
00:37:18.971 [2024-07-21 08:33:28.257514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.257540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.257665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.257691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.257829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.257859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.258005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.258030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 00:37:18.971 [2024-07-21 08:33:28.258133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.258158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.971 qpair failed and we were unable to recover it. 
00:37:18.971 [2024-07-21 08:33:28.258338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.971 [2024-07-21 08:33:28.258367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.258514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.258539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.258676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.258702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.258828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.258870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.259001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.259026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 
00:37:18.972 [2024-07-21 08:33:28.259125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.259152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.259256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.259283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.259408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.259435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.259600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.259636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.259753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.259783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 
00:37:18.972 [2024-07-21 08:33:28.259905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.259932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.260036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.260062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.260218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.260244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.260339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.260381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.260492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.260520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 
00:37:18.972 [2024-07-21 08:33:28.260699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.260726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.260849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.260874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.261002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.261028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.261194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.261221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.261375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.261400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 
00:37:18.972 [2024-07-21 08:33:28.261520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.261563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.261712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.261742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.261897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.261923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.262052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.262077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.262234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.262278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 
00:37:18.972 [2024-07-21 08:33:28.262448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.262474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.262572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.262597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.262763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.262792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.262935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.262962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.263087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.263113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 
00:37:18.972 [2024-07-21 08:33:28.263272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.263306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.263445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.263471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.263591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.263621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.263808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.263837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 00:37:18.972 [2024-07-21 08:33:28.263958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.972 [2024-07-21 08:33:28.263984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.972 qpair failed and we were unable to recover it. 
00:37:18.973 [2024-07-21 08:33:28.264085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.973 [2024-07-21 08:33:28.264110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.973 qpair failed and we were unable to recover it. 00:37:18.973 [2024-07-21 08:33:28.264230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.973 [2024-07-21 08:33:28.264258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.973 qpair failed and we were unable to recover it. 00:37:18.973 [2024-07-21 08:33:28.264428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.973 [2024-07-21 08:33:28.264454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.973 qpair failed and we were unable to recover it. 00:37:18.973 [2024-07-21 08:33:28.264583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.973 [2024-07-21 08:33:28.264608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.973 qpair failed and we were unable to recover it. 00:37:18.973 [2024-07-21 08:33:28.264774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.973 [2024-07-21 08:33:28.264818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.973 qpair failed and we were unable to recover it. 
00:37:18.975 [2024-07-21 08:33:28.283636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.975 [2024-07-21 08:33:28.283667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.975 qpair failed and we were unable to recover it. 00:37:18.975 [2024-07-21 08:33:28.283795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.975 [2024-07-21 08:33:28.283820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.283943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.283983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.284119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.284148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.284272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.284298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 
00:37:18.976 [2024-07-21 08:33:28.284454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.284496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.284675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.284704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.284880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.284906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.285013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.285039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.285166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.285191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 
00:37:18.976 [2024-07-21 08:33:28.285324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.285353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.285475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.285517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.285691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.285717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.285816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.285841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.285973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.285998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 
00:37:18.976 [2024-07-21 08:33:28.286123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.286148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.286251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.286277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.286376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.286401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.286523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.286550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.286648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.286682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 
00:37:18.976 [2024-07-21 08:33:28.286800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.286826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.287003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.287031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.287175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.287204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.287348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.287374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.287478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.287504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 
00:37:18.976 [2024-07-21 08:33:28.287635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.287664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.287810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.287840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.287992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.288018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.288125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.288151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.288299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.288327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 
00:37:18.976 [2024-07-21 08:33:28.288505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.288531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.288657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.288683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.288806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.288832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.288988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.976 [2024-07-21 08:33:28.289029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.976 qpair failed and we were unable to recover it. 00:37:18.976 [2024-07-21 08:33:28.289174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.289203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 
00:37:18.977 [2024-07-21 08:33:28.289351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.289377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.289531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.289575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.289704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.289730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.289831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.289857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.289954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.289980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 
00:37:18.977 [2024-07-21 08:33:28.290112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.290142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.290243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.290269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.290414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.290443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.290556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.290581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.290675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.290702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 
00:37:18.977 [2024-07-21 08:33:28.290853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.290881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.291031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.291059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.291235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.291261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.291366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.291392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.291491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.291518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 
00:37:18.977 [2024-07-21 08:33:28.291710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.291738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.291899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.291925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.292066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.292094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.292252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.292277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.292394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.292421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 
00:37:18.977 [2024-07-21 08:33:28.292573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.292599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.292699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.292725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.292888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.292918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.293052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.293080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.293192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.293220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 
00:37:18.977 [2024-07-21 08:33:28.293384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.293428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.293541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.293570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.293715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.293745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.293898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.293923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.294077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.294102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 
00:37:18.977 [2024-07-21 08:33:28.294258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.294288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.294458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.294485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.294619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.294645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.294749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.294775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.294954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.294983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 
00:37:18.977 [2024-07-21 08:33:28.295135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.295162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.295291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.295317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.295443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.295469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.977 qpair failed and we were unable to recover it. 00:37:18.977 [2024-07-21 08:33:28.295568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.977 [2024-07-21 08:33:28.295594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.295741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.295768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 
00:37:18.978 [2024-07-21 08:33:28.295899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.295926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.296103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.296132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.296292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.296317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.296445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.296472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.296625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.296651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 
00:37:18.978 [2024-07-21 08:33:28.296819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.296852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.296967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.296996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.297151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.297177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.297306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.297332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.297435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.297461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 
00:37:18.978 [2024-07-21 08:33:28.297623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.297651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.297775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.297804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.297928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.297954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.298079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.298105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.298225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.298253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 
00:37:18.978 [2024-07-21 08:33:28.298364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.298407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.298533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.298560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.298658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.298683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.298805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.298830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.298955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.298996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 
00:37:18.978 [2024-07-21 08:33:28.299149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.299175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.299329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.299355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.299498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.299526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.299691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.299720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.299870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.299899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 
00:37:18.978 [2024-07-21 08:33:28.300024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.300050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.300211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.300236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.300352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.300395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.300549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.300574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.300731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.300756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 
00:37:18.978 [2024-07-21 08:33:28.300862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.300905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.301069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.301098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.301267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.301296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.301492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.301521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.301657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.301684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 
00:37:18.978 [2024-07-21 08:33:28.301816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.301842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.302009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.302037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.302191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.978 [2024-07-21 08:33:28.302218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.978 qpair failed and we were unable to recover it. 00:37:18.978 [2024-07-21 08:33:28.302321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.302347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.302498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.302524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 
00:37:18.979 [2024-07-21 08:33:28.302660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.302687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.302781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.302807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.302942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.302968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.303085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.303114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.303256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.303285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 
00:37:18.979 [2024-07-21 08:33:28.303432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.303462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.303593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.303624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.303806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.303834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.303950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.303981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.304122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.304148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 
00:37:18.979 [2024-07-21 08:33:28.304276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.304302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.304492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.304519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.304676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.304702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.304859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.304884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.305008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.305034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 
00:37:18.979 [2024-07-21 08:33:28.305162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.305187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.305340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.305369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.305544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.305570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.305708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.305752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.305930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.305958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 
00:37:18.979 [2024-07-21 08:33:28.306129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.306157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.306328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.306355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.306526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.306555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.306660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.306704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.306813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.306839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 
00:37:18.979 [2024-07-21 08:33:28.306992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.307017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.307183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.307211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.307379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.307408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.307590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.307621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.307748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.307773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 
00:37:18.979 [2024-07-21 08:33:28.307897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.307922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.308073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.308099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.308262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.308290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.308402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.308444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.308596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.308627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 
00:37:18.979 [2024-07-21 08:33:28.308780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.308806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.308986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.309014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.309129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.979 [2024-07-21 08:33:28.309156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.979 qpair failed and we were unable to recover it. 00:37:18.979 [2024-07-21 08:33:28.309292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.309317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.309467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.309495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 
00:37:18.980 [2024-07-21 08:33:28.309637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.309667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.309807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.309832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.309956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.309982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.310109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.310137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.310276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.310305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 
00:37:18.980 [2024-07-21 08:33:28.310456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.310486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.310611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.310643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.310779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.310804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.310963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.310994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.311137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.311164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 
00:37:18.980 [2024-07-21 08:33:28.311297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.311322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.311481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.311524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.311656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.311686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.311831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.311857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [2024-07-21 08:33:28.312010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.312052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 
00:37:18.980 [2024-07-21 08:33:28.312233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.980 [2024-07-21 08:33:28.312259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.980 qpair failed and we were unable to recover it. 00:37:18.980 [last error triplet repeated through 2024-07-21 08:33:28.331762 with identical parameters: tqpair=0x7fd7d4000b90, addr=10.0.0.2, port=4420, errno = 111]
00:37:18.983 [2024-07-21 08:33:28.331907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.331936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.332079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.332109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.332254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.332283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.332408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.332451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.332580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.332610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 
00:37:18.983 [2024-07-21 08:33:28.332789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.332816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.332957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.332986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.333135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.333160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.333284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.333310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.333449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.333478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 
00:37:18.983 [2024-07-21 08:33:28.333646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.333674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.333793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.333818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.333942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.333968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.334133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.334161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.334304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.334334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 
00:37:18.983 [2024-07-21 08:33:28.334514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.334540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.334712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.334743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.334907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.334936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.335065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.335093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.335243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.335269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 
00:37:18.983 [2024-07-21 08:33:28.335377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.335403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.335563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.335589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.335727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.335761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.983 qpair failed and we were unable to recover it. 00:37:18.983 [2024-07-21 08:33:28.335907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.983 [2024-07-21 08:33:28.335933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.336109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.336138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 
00:37:18.984 [2024-07-21 08:33:28.336304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.336333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.336510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.336537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.336661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.336688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.336837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.336863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.337016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.337045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 
00:37:18.984 [2024-07-21 08:33:28.337189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.337219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.337395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.337422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.337575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.337601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.337710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.337735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.337832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.337859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 
00:37:18.984 [2024-07-21 08:33:28.338016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.338041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.338198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.338227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.338367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.338395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.338541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.338570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.338708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.338734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 
00:37:18.984 [2024-07-21 08:33:28.338887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.338913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.339067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.339096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.339262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.339291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.339410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.339453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.339620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.339650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 
00:37:18.984 [2024-07-21 08:33:28.339759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.339784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.339922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.339951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.340066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.340092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.340232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.340258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.340379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.340422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 
00:37:18.984 [2024-07-21 08:33:28.340527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.340553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.340687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.340715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.340830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.340871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.341045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.341073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.341192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.341222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 
00:37:18.984 [2024-07-21 08:33:28.341365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.341391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.341492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.341519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.341669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.341699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.341840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.341869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.342037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.342062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 
00:37:18.984 [2024-07-21 08:33:28.342205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.342234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.342334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.342362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.342503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.342536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.342685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.984 [2024-07-21 08:33:28.342711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.984 qpair failed and we were unable to recover it. 00:37:18.984 [2024-07-21 08:33:28.342842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.342867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 
00:37:18.985 [2024-07-21 08:33:28.342967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.342992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.343138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.343167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.343337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.343362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.343533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.343561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.343696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.343722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 
00:37:18.985 [2024-07-21 08:33:28.343846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.343871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.344021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.344046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.344146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.344188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.344296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.344324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.344498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.344525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 
00:37:18.985 [2024-07-21 08:33:28.344653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.344679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.344790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.344816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.344973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.345002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.345147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.345175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 00:37:18.985 [2024-07-21 08:33:28.345292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.985 [2024-07-21 08:33:28.345319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.985 qpair failed and we were unable to recover it. 
00:37:18.985 [2024-07-21 08:33:28.348789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.985 [2024-07-21 08:33:28.348828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:18.986 qpair failed and we were unable to recover it.
00:37:18.988 [2024-07-21 08:33:28.364663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.364689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.364790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.364816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.364937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.364963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.365143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.365171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.365310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.365339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 
00:37:18.988 [2024-07-21 08:33:28.365538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.365567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.365758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.365784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.365892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.365918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.366025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.366051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.366216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.366244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 
00:37:18.988 [2024-07-21 08:33:28.366357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.366386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.366539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.366565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.366707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.366732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.366833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.366858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.367000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.367029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 
00:37:18.988 [2024-07-21 08:33:28.367193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.367221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.367399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.367428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.367544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.367572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.367704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.367731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.367836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.367862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 
00:37:18.988 [2024-07-21 08:33:28.367981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.368010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.368148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.368177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.368311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.368340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.368456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.368484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.368662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.368689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 
00:37:18.988 [2024-07-21 08:33:28.368786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.368812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.368936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.368962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.369063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.369088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.369217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.369243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.988 [2024-07-21 08:33:28.369348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.369374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 
00:37:18.988 [2024-07-21 08:33:28.369527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.988 [2024-07-21 08:33:28.369570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.988 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.369725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.369750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.369877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.369908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.370039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.370064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.370173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.370200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 
00:37:18.989 [2024-07-21 08:33:28.370297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.370322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.370479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.370504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.370607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.370639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.370739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.370764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.370885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.370915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 
00:37:18.989 [2024-07-21 08:33:28.371057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.371086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.371231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.371256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.371404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.371446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.371595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.371625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.371759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.371786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 
00:37:18.989 [2024-07-21 08:33:28.371934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.371960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.372101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.372129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.372245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.372274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.372416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.372446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.372586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.372611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 
00:37:18.989 [2024-07-21 08:33:28.372760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.372785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.372944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.372987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.373164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.373193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.373366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.373391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.373558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.373587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 
00:37:18.989 [2024-07-21 08:33:28.373741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.373768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.373899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.373925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.374083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.374109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.374239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.374266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.374436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.374463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 
00:37:18.989 [2024-07-21 08:33:28.374568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.374594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.374727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.374753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.374849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.374876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.375055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.375085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.989 [2024-07-21 08:33:28.375237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.375263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 
00:37:18.989 [2024-07-21 08:33:28.375389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.989 [2024-07-21 08:33:28.375414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.989 qpair failed and we were unable to recover it. 00:37:18.990 [2024-07-21 08:33:28.375568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.990 [2024-07-21 08:33:28.375594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.990 qpair failed and we were unable to recover it. 00:37:18.990 [2024-07-21 08:33:28.375728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.990 [2024-07-21 08:33:28.375754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.990 qpair failed and we were unable to recover it. 00:37:18.990 [2024-07-21 08:33:28.375916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.990 [2024-07-21 08:33:28.375945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.990 qpair failed and we were unable to recover it. 00:37:18.990 [2024-07-21 08:33:28.376117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.990 [2024-07-21 08:33:28.376143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.990 qpair failed and we were unable to recover it. 
00:37:18.990 [2024-07-21 08:33:28.376270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.990 [2024-07-21 08:33:28.376312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.990 qpair failed and we were unable to recover it. 00:37:18.990 [2024-07-21 08:33:28.376450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.990 [2024-07-21 08:33:28.376479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.990 qpair failed and we were unable to recover it. 00:37:18.990 [2024-07-21 08:33:28.376590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.990 [2024-07-21 08:33:28.376631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.990 qpair failed and we were unable to recover it. 00:37:18.990 [2024-07-21 08:33:28.376791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.990 [2024-07-21 08:33:28.376818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.990 qpair failed and we were unable to recover it. 00:37:18.990 [2024-07-21 08:33:28.376912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.990 [2024-07-21 08:33:28.376938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.990 qpair failed and we were unable to recover it. 
00:37:18.990 [2024-07-21 08:33:28.377090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.990 [2024-07-21 08:33:28.377116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.990 qpair failed and we were unable to recover it. 
[the same connect()-failed / qpair-failed message pair repeats verbatim for every reconnect attempt, tqpair=0x7fd7d4000b90, addr=10.0.0.2, port=4420, errno = 111, through 2024-07-21 08:33:28.395950]
00:37:18.993 [2024-07-21 08:33:28.396056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.396082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.396179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.396205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.396333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.396364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.396537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.396567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.396732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.396759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 
00:37:18.993 [2024-07-21 08:33:28.396866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.396909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.397046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.397075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.397224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.397250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.397349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.397375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.397540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.397569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 
00:37:18.993 [2024-07-21 08:33:28.397711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.397740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.397868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.397894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.398021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.398047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.398222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.398250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.398404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.398433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 
00:37:18.993 [2024-07-21 08:33:28.398588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.398619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.398730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.398755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.398904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.398933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.399102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.399131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.399247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.399272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 
00:37:18.993 [2024-07-21 08:33:28.399403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.399430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.399576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.399606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.399781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.399809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.399955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.399981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.400136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.400162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 
00:37:18.993 [2024-07-21 08:33:28.400334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.400361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.400517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.400558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.400710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.400737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.400867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.400893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.400989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.401015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 
00:37:18.993 [2024-07-21 08:33:28.401163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.401192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.993 [2024-07-21 08:33:28.401368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.993 [2024-07-21 08:33:28.401394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.993 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.401497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.401522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.401698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.401728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.401893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.401922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 
00:37:18.994 [2024-07-21 08:33:28.402096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.402122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.402236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.402279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.402429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.402456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.402586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.402617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.402757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.402782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 
00:37:18.994 [2024-07-21 08:33:28.402935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.402961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.403109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.403137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.403282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.403311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.403438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.403464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.403592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.403622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 
00:37:18.994 [2024-07-21 08:33:28.403721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.403747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.403878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.403903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.404056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.404082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.404207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.404233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.404384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.404412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 
00:37:18.994 [2024-07-21 08:33:28.404600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.404635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.404783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.404808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.404902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.404929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.405060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.405085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.405268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.405296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 
00:37:18.994 [2024-07-21 08:33:28.405441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.405468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.405596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.405626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.405782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.405808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.405989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.406018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.406167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.406194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 
00:37:18.994 [2024-07-21 08:33:28.406326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.406351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.406505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.406531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.406647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.406690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.406815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.406840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.407000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.407042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 
00:37:18.994 [2024-07-21 08:33:28.407180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.407209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.407354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.407383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.407560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.407586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.407722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.407748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.407872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.407902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 
00:37:18.994 [2024-07-21 08:33:28.408003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.408031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.408160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.994 [2024-07-21 08:33:28.408186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.994 qpair failed and we were unable to recover it. 00:37:18.994 [2024-07-21 08:33:28.408296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.995 [2024-07-21 08:33:28.408321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.995 qpair failed and we were unable to recover it. 00:37:18.995 [2024-07-21 08:33:28.408483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.995 [2024-07-21 08:33:28.408509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.995 qpair failed and we were unable to recover it. 00:37:18.995 [2024-07-21 08:33:28.408673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.995 [2024-07-21 08:33:28.408700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.995 qpair failed and we were unable to recover it. 
00:37:18.995 [2024-07-21 08:33:28.408863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.995 [2024-07-21 08:33:28.408888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.995 qpair failed and we were unable to recover it. 00:37:18.995 [2024-07-21 08:33:28.408986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.995 [2024-07-21 08:33:28.409012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.995 qpair failed and we were unable to recover it. 00:37:18.995 [2024-07-21 08:33:28.409191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.995 [2024-07-21 08:33:28.409217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.995 qpair failed and we were unable to recover it. 00:37:18.995 [2024-07-21 08:33:28.409344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.995 [2024-07-21 08:33:28.409370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.995 qpair failed and we were unable to recover it. 00:37:18.995 [2024-07-21 08:33:28.409532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.995 [2024-07-21 08:33:28.409558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.995 qpair failed and we were unable to recover it. 
00:37:18.997 [2024-07-21 08:33:28.428288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.997 [2024-07-21 08:33:28.428315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.997 qpair failed and we were unable to recover it. 00:37:18.997 [2024-07-21 08:33:28.428471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.997 [2024-07-21 08:33:28.428497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.997 qpair failed and we were unable to recover it. 00:37:18.997 [2024-07-21 08:33:28.428627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.997 [2024-07-21 08:33:28.428654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.997 qpair failed and we were unable to recover it. 00:37:18.997 [2024-07-21 08:33:28.428778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.997 [2024-07-21 08:33:28.428805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.428989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.429018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 
00:37:18.998 [2024-07-21 08:33:28.429140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.429166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.429265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.429291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.429412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.429438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.429564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.429590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.429692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.429718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 
00:37:18.998 [2024-07-21 08:33:28.429870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.429895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.430022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.430051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.430221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.430250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.430395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.430424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.430551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.430593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 
00:37:18.998 [2024-07-21 08:33:28.430744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.430770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.430917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.430946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.431117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.431143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.431273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.431299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.431424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.431449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 
00:37:18.998 [2024-07-21 08:33:28.431628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.431657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.431799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.431827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.431932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.431958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.432087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.432112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.432231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.432261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 
00:37:18.998 [2024-07-21 08:33:28.432416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.432442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.432547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.432573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.432772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.432798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.432926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.432952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.433055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.433081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 
00:37:18.998 [2024-07-21 08:33:28.433193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.433218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.433343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.433369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.433473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.433499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.433631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.433657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.433788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.433832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 
00:37:18.998 [2024-07-21 08:33:28.433938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.433979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.434111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.434137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.434270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.998 [2024-07-21 08:33:28.434297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.998 qpair failed and we were unable to recover it. 00:37:18.998 [2024-07-21 08:33:28.434421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.434446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.434570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.434600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 
00:37:18.999 [2024-07-21 08:33:28.434767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.434792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.434893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.434918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.435036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.435061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.435210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.435238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.435351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.435380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 
00:37:18.999 [2024-07-21 08:33:28.435494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.435535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.435715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.435741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.435842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.435868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.436045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.436074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.436194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.436220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 
00:37:18.999 [2024-07-21 08:33:28.436343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.436369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.436473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.436499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.436633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.436659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.436818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.436843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.436968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.436994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 
00:37:18.999 [2024-07-21 08:33:28.437119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.437145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.437269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.437294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.437416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.437441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.437604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.437634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.437783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.437812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 
00:37:18.999 [2024-07-21 08:33:28.437974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.438000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.438131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.438157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.438308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.438333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.438456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.438485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.438634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.438663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 
00:37:18.999 [2024-07-21 08:33:28.438820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.438846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.439002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.439028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.439151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.439177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.439329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.439357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.439509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.439535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 
00:37:18.999 [2024-07-21 08:33:28.439661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.439689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.439841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.439867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.440053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.440081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.440221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.440248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 00:37:18.999 [2024-07-21 08:33:28.440403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.440447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 
00:37:18.999 [2024-07-21 08:33:28.440631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.999 [2024-07-21 08:33:28.440660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:18.999 qpair failed and we were unable to recover it. 
[... the same pair of messages — connect() failed, errno = 111 (ECONNREFUSED) from posix.c:1038:posix_sock_create, followed by the sock connection error from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock for tqpair=0x7fd7d4000b90, addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously with only the timestamps changing, from 08:33:28.440829 through 08:33:28.459462 ...]
00:37:19.003 [2024-07-21 08:33:28.459759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.459784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 
00:37:19.003 [2024-07-21 08:33:28.459888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.459914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.460027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.460055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.460219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.460248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.460395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.460421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.460522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.460548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 
00:37:19.003 [2024-07-21 08:33:28.460713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.460740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.460854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.460880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.461009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.461035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.461166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.461192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.461334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.461367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 
00:37:19.003 [2024-07-21 08:33:28.461502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.461531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.461660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.461688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.461796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.461822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.461958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.461984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.462125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.462151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 
00:37:19.003 [2024-07-21 08:33:28.462296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.462322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.462426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.462452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.462602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.462634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.462803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.462832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.462978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.463005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 
00:37:19.003 [2024-07-21 08:33:28.463154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.463197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.463311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.463341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.463489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.463518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.463651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.463679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.463775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.463800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 
00:37:19.003 [2024-07-21 08:33:28.463923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.463949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.464076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.464102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.464203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.464230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.464383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.464427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.464600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.464631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 
00:37:19.003 [2024-07-21 08:33:28.464736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.464763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.464885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.464911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.465008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.003 [2024-07-21 08:33:28.465034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.003 qpair failed and we were unable to recover it. 00:37:19.003 [2024-07-21 08:33:28.465199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.465225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.465325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.465352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 
00:37:19.004 [2024-07-21 08:33:28.465456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.465491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.465655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.465682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.465795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.465824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.465942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.465973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.466149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.466176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 
00:37:19.004 [2024-07-21 08:33:28.466281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.466324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.466435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.466464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.466641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.466669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.466826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.466852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.467032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.467061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 
00:37:19.004 [2024-07-21 08:33:28.467199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.467228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.467371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.467400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.467569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.467598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.467733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.467759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.467881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.467911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 
00:37:19.004 [2024-07-21 08:33:28.468049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.468077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.468276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.468302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.468406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.468449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.468583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.468617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.468778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.468804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 
00:37:19.004 [2024-07-21 08:33:28.468956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.468982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.469127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.469152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.469306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.469332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.469510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.469536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.469694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.469721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 
00:37:19.004 [2024-07-21 08:33:28.469874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.469903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.470090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.470116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.470276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.470318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.470476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.470502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.470673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.470703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 
00:37:19.004 [2024-07-21 08:33:28.470840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.470870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.471025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.471054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.471174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.471201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.471356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.471381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.471496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.471525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 
00:37:19.004 [2024-07-21 08:33:28.471701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.471731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.471882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.471908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.472035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.472061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.004 [2024-07-21 08:33:28.472188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.004 [2024-07-21 08:33:28.472216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.004 qpair failed and we were unable to recover it. 00:37:19.005 [2024-07-21 08:33:28.472382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.005 [2024-07-21 08:33:28.472408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.005 qpair failed and we were unable to recover it. 
00:37:19.005 [2024-07-21 08:33:28.472559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.005 [2024-07-21 08:33:28.472585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.005 qpair failed and we were unable to recover it.
00:37:19.007 [2024-07-21 08:33:28.492093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.492122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.492236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.492261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.492367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.492394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.492534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.492562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.492744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.492773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 
00:37:19.008 [2024-07-21 08:33:28.492912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.492938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.493068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.493097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.493225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.493253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.493394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.493423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.493599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.493631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 
00:37:19.008 [2024-07-21 08:33:28.493737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.493763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.493861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.493887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.494020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.494047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.494186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.494212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.494339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.494366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 
00:37:19.008 [2024-07-21 08:33:28.494518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.494547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.494698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.494728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.494875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.494900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.494999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.495024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.495152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.495182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 
00:37:19.008 [2024-07-21 08:33:28.495354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.495383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.495496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.495522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.495655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.495682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.495826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.495856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.495996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.496025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 
00:37:19.008 [2024-07-21 08:33:28.496173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.496199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.496320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.496345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.496466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.496510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.496664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.496691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.496858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.496883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 
00:37:19.008 [2024-07-21 08:33:28.497013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.497039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.497134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.497159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.497311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.497340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.497463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.497505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 00:37:19.008 [2024-07-21 08:33:28.497654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.497697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.008 qpair failed and we were unable to recover it. 
00:37:19.008 [2024-07-21 08:33:28.497805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.008 [2024-07-21 08:33:28.497831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.497962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.497987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.498138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.498164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.498263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.498306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.498449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.498477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 
00:37:19.009 [2024-07-21 08:33:28.498646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.498675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.498851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.498877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.499031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.499060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.499196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.499224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.499353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.499382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 
00:37:19.009 [2024-07-21 08:33:28.499499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.499525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.499659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.499690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.499820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.499846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.500027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.500055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.500178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.500203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 
00:37:19.009 [2024-07-21 08:33:28.500327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.500353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.500504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.500532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.500698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.500728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.500895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.500921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.501091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.501119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 
00:37:19.009 [2024-07-21 08:33:28.501235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.501264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.501418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.501448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.501628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.501654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.501798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.501827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.501965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.502010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 
00:37:19.009 [2024-07-21 08:33:28.502146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.502171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.502299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.502325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.502421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.502447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.502604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.502644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.502807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.502836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 
00:37:19.009 [2024-07-21 08:33:28.502997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.503023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.503150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.503192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.503333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.503362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.503498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.503528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.503679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.503706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 
00:37:19.009 [2024-07-21 08:33:28.503841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.503867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.504021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.504050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.504216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.504244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.504385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.504411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 00:37:19.009 [2024-07-21 08:33:28.504538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.009 [2024-07-21 08:33:28.504564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.009 qpair failed and we were unable to recover it. 
00:37:19.009 [2024-07-21 08:33:28.504725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.010 [2024-07-21 08:33:28.504751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.010 qpair failed and we were unable to recover it. 00:37:19.010 [2024-07-21 08:33:28.504880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.010 [2024-07-21 08:33:28.504907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.010 qpair failed and we were unable to recover it. 00:37:19.010 [2024-07-21 08:33:28.505068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.010 [2024-07-21 08:33:28.505095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.010 qpair failed and we were unable to recover it. 00:37:19.010 [2024-07-21 08:33:28.505248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.010 [2024-07-21 08:33:28.505276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.010 qpair failed and we were unable to recover it. 00:37:19.010 [2024-07-21 08:33:28.505439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.010 [2024-07-21 08:33:28.505468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.010 qpair failed and we were unable to recover it. 
00:37:19.010 [2024-07-21 08:33:28.505610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.010 [2024-07-21 08:33:28.505660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.010 qpair failed and we were unable to recover it. 00:37:19.010 [2024-07-21 08:33:28.505816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.010 [2024-07-21 08:33:28.505843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.010 qpair failed and we were unable to recover it. 00:37:19.010 [2024-07-21 08:33:28.505960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.010 [2024-07-21 08:33:28.505987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.010 qpair failed and we were unable to recover it. 00:37:19.010 [2024-07-21 08:33:28.506119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.010 [2024-07-21 08:33:28.506146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.010 qpair failed and we were unable to recover it. 00:37:19.010 [2024-07-21 08:33:28.506274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.010 [2024-07-21 08:33:28.506300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.010 qpair failed and we were unable to recover it. 
00:37:19.010 [2024-07-21 08:33:28.506399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.506425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.506577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.506607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.506743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.506769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.506867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.506893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.507025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.507052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.507180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.507205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.507312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.507338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.507436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.507461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.507586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.507620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.507775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.507800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.507903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.507930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.508055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.508080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.508180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.508206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.508359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.508384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.508546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.508571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.508693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.508726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.508861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.508889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.509024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.509052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.509149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.509175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.509304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.509330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.509475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.509503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.509654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.509680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.509780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.509806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.509945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.509971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.510074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.510100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.510193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.510219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.510320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.510347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.510509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.510536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.010 [2024-07-21 08:33:28.510672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.010 [2024-07-21 08:33:28.510699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.010 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.510799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.510826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.510922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.510947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.511098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.511124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.511229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.511256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.511354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.511380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.511493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.511518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.511621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.511647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.511743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.511769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.511930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.511955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.512053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.512079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.512179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.512206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.512362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.512388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.512506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.512535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.512670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.512696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.512813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.512839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.512944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.512970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.513132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.513158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.513283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.513309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.513438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.513464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.513590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.513625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.513763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.513789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.513942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.513968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.514124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.514150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.514256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.514300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.514473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.514499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.514626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.514653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.514788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.514815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.514917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.514944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.515054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.515080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.515209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.515235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.515354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.011 [2024-07-21 08:33:28.515379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.011 qpair failed and we were unable to recover it.
00:37:19.011 [2024-07-21 08:33:28.515537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.515562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.515685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.515712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.515838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.515863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.515966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.515991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.516117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.516143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.516266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.516291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.516409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.516435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.516584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.516610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.516757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.516783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.516882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.516909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.517014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.517039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.517193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.517219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.517403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.517433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.517549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.517594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.517724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.517767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.517930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.517958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.518163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.518191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.518351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.518380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.518524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.518561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.518695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.518722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.518828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.518855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.518963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.519011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.519134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.519175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.519306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.519336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.519508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.519533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.519685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.519712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.519839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.519865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.519967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.519993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.520124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.520150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.520270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.520295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.520396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.520423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.520530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.520555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.520692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.520719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.520819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.520845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.520973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.012 [2024-07-21 08:33:28.520998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.012 qpair failed and we were unable to recover it.
00:37:19.012 [2024-07-21 08:33:28.521108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.012 [2024-07-21 08:33:28.521134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.012 qpair failed and we were unable to recover it. 00:37:19.012 [2024-07-21 08:33:28.521284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.012 [2024-07-21 08:33:28.521310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.012 qpair failed and we were unable to recover it. 00:37:19.012 [2024-07-21 08:33:28.521410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.012 [2024-07-21 08:33:28.521437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.012 qpair failed and we were unable to recover it. 00:37:19.012 [2024-07-21 08:33:28.521541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.012 [2024-07-21 08:33:28.521566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.012 qpair failed and we were unable to recover it. 00:37:19.012 [2024-07-21 08:33:28.521667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.012 [2024-07-21 08:33:28.521694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.012 qpair failed and we were unable to recover it. 
00:37:19.012 [2024-07-21 08:33:28.521825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.012 [2024-07-21 08:33:28.521851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.012 qpair failed and we were unable to recover it. 00:37:19.012 [2024-07-21 08:33:28.522009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.012 [2024-07-21 08:33:28.522035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.012 qpair failed and we were unable to recover it. 00:37:19.012 [2024-07-21 08:33:28.522156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.012 [2024-07-21 08:33:28.522182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.012 qpair failed and we were unable to recover it. 00:37:19.013 [2024-07-21 08:33:28.522287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.013 [2024-07-21 08:33:28.522313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.013 qpair failed and we were unable to recover it. 00:37:19.013 [2024-07-21 08:33:28.522448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.522474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 
00:37:19.014 [2024-07-21 08:33:28.522575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.522603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.014 [2024-07-21 08:33:28.522738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.522764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.014 [2024-07-21 08:33:28.522907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.522936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.014 [2024-07-21 08:33:28.523088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.523115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.014 [2024-07-21 08:33:28.523265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.523290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 
00:37:19.014 [2024-07-21 08:33:28.523385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.523411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.014 [2024-07-21 08:33:28.523512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.523539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.014 [2024-07-21 08:33:28.523706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.523734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.014 [2024-07-21 08:33:28.523863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.523889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.014 [2024-07-21 08:33:28.523986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.524011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 
00:37:19.014 [2024-07-21 08:33:28.524166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.524192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.014 [2024-07-21 08:33:28.524322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.524347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.014 [2024-07-21 08:33:28.524501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.524526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.014 [2024-07-21 08:33:28.524688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.014 [2024-07-21 08:33:28.524715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.014 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.524843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.524870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 
00:37:19.298 [2024-07-21 08:33:28.525035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.525061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.525186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.525216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.525312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.525339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.525474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.525500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.525648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.525674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 
00:37:19.298 [2024-07-21 08:33:28.525775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.525800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.525932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.525957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.526058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.526085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.526223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.526248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.526364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.526391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 
00:37:19.298 [2024-07-21 08:33:28.526494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.526519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.526646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.526674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.526827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.526862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.526955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.526981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.527087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.527113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 
00:37:19.298 [2024-07-21 08:33:28.527243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.527269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.527397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.527424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.527529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.527554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.527690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.527717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 00:37:19.298 [2024-07-21 08:33:28.527842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.298 [2024-07-21 08:33:28.527868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.298 qpair failed and we were unable to recover it. 
00:37:19.299 [2024-07-21 08:33:28.527992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.528018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.528182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.528208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.528313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.528338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.528455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.528479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.528618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.528644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 
00:37:19.299 [2024-07-21 08:33:28.528801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.528826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.528929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.528955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.529110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.529135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.529245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.529271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.529378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.529403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 
00:37:19.299 [2024-07-21 08:33:28.529540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.529565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.529688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.529714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.529842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.529868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.529972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.529998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.530128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.530154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 
00:37:19.299 [2024-07-21 08:33:28.530256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.530282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.530384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.530410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.530562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.530588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.530722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.530749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.530905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.530930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 
00:37:19.299 [2024-07-21 08:33:28.531036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.531063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.531187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.531217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.531347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.531372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.531498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.531524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.531681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.531707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 
00:37:19.299 [2024-07-21 08:33:28.531835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.531860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.531984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.532011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.532135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.532160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.532290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.532317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.532414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.532440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 
00:37:19.299 [2024-07-21 08:33:28.532532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.532559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.532712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.532739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.532873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.532898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.533027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.533053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.533177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.533202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 
00:37:19.299 [2024-07-21 08:33:28.533331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.533359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.533503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.533549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.533676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.533709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.533870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.533902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 00:37:19.299 [2024-07-21 08:33:28.534002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.299 [2024-07-21 08:33:28.534027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.299 qpair failed and we were unable to recover it. 
00:37:19.299 [2024-07-21 08:33:28.534149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.300 [2024-07-21 08:33:28.534175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.300 qpair failed and we were unable to recover it. 00:37:19.300 [2024-07-21 08:33:28.534291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.300 [2024-07-21 08:33:28.534317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.300 qpair failed and we were unable to recover it. 00:37:19.300 [2024-07-21 08:33:28.534421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.300 [2024-07-21 08:33:28.534446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.300 qpair failed and we were unable to recover it. 00:37:19.300 [2024-07-21 08:33:28.534607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.300 [2024-07-21 08:33:28.534640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.300 qpair failed and we were unable to recover it. 00:37:19.300 [2024-07-21 08:33:28.534742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.300 [2024-07-21 08:33:28.534767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.300 qpair failed and we were unable to recover it. 
00:37:19.303 [2024-07-21 08:33:28.552359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.552385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.552502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.552530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.552654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.552681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.552804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.552829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.552963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.552990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 
00:37:19.303 [2024-07-21 08:33:28.553144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.553170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.553265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.553290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.553397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.553422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.553576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.553602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.553707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.553732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 
00:37:19.303 [2024-07-21 08:33:28.553831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.553857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.553982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.554007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.554161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.554188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.554337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.554363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.554464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.554489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 
00:37:19.303 [2024-07-21 08:33:28.554639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.554686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.554803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.554832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.554985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.555014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.555161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.555190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.555300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.555328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 
00:37:19.303 [2024-07-21 08:33:28.555482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.555512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.555639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.555666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.555793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.555820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.555952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.555977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.556102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.556128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 
00:37:19.303 [2024-07-21 08:33:28.556252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.556278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.556409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.556438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.556568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.556595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.556738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.556764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.556895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.556920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 
00:37:19.303 [2024-07-21 08:33:28.557045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.557072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.557198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.557224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.557330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.557356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.557511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.557536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.557700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.557727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 
00:37:19.303 [2024-07-21 08:33:28.557879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.557905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.558007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.558033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.558162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.558189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.558348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.558374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 00:37:19.303 [2024-07-21 08:33:28.558489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.303 [2024-07-21 08:33:28.558517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.303 qpair failed and we were unable to recover it. 
00:37:19.303 [2024-07-21 08:33:28.558664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.558708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.558815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.558840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.558941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.558966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.559067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.559093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.559246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.559272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 
00:37:19.304 [2024-07-21 08:33:28.559373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.559400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.559563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.559588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.559743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.559770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.559901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.559928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.560054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.560079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 
00:37:19.304 [2024-07-21 08:33:28.560181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.560207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.560307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.560332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.560465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.560491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.560641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.560678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.560851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.560880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 
00:37:19.304 [2024-07-21 08:33:28.561015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.561042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.561167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.561197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.561335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.561366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.561511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.561537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.561666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.561694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 
00:37:19.304 [2024-07-21 08:33:28.561829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.561856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.562009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.562035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.562127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.562153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.562257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.562282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.562405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.562431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 
00:37:19.304 [2024-07-21 08:33:28.562534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.562559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.562722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.562755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.562879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.562905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.563045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.563090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.563252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.563278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 
00:37:19.304 [2024-07-21 08:33:28.563410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.563434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.563562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.563587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.563729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.563774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.563948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.563991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 00:37:19.304 [2024-07-21 08:33:28.564167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.304 [2024-07-21 08:33:28.564212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.304 qpair failed and we were unable to recover it. 
00:37:19.304 [2024-07-21 08:33:28.564369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.304 [2024-07-21 08:33:28.564396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.304 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7fd7dc000b90 through 2024-07-21 08:33:28.566983 ...]
00:37:19.305 [2024-07-21 08:33:28.567124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.305 [2024-07-21 08:33:28.567164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.305 qpair failed and we were unable to recover it.
[... sequence repeats for tqpair=0x7fd7d4000b90 through 2024-07-21 08:33:28.571591 ...]
00:37:19.305 [2024-07-21 08:33:28.571769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.306 [2024-07-21 08:33:28.571810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.306 qpair failed and we were unable to recover it.
[... sequence repeats for tqpair=0x7fd7e4000b90 through 2024-07-21 08:33:28.582289 ...]
00:37:19.307 [2024-07-21 08:33:28.582441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.307 [2024-07-21 08:33:28.582467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.307 qpair failed and we were unable to recover it. 00:37:19.307 [2024-07-21 08:33:28.582629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.307 [2024-07-21 08:33:28.582656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.307 qpair failed and we were unable to recover it. 00:37:19.307 [2024-07-21 08:33:28.582756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.307 [2024-07-21 08:33:28.582783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.307 qpair failed and we were unable to recover it. 00:37:19.307 [2024-07-21 08:33:28.582885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.307 [2024-07-21 08:33:28.582911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.307 qpair failed and we were unable to recover it. 00:37:19.307 [2024-07-21 08:33:28.583043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.307 [2024-07-21 08:33:28.583069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.307 qpair failed and we were unable to recover it. 
00:37:19.307 [2024-07-21 08:33:28.583175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.307 [2024-07-21 08:33:28.583204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.307 qpair failed and we were unable to recover it. 00:37:19.307 [2024-07-21 08:33:28.583299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.583324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.583452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.583478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.583582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.583608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.583722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.583748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 
00:37:19.308 [2024-07-21 08:33:28.583882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.583908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.584010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.584035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.584185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.584210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.584368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.584394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.584521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.584548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 
00:37:19.308 [2024-07-21 08:33:28.584719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.584749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.584921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.584950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.585110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.585138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.585285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.585311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.585457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.585483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 
00:37:19.308 [2024-07-21 08:33:28.585575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.585601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.585712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.585738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.585908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.585937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.586099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.586134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.586256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.586283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 
00:37:19.308 [2024-07-21 08:33:28.586437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.586463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.586586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.586618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.586710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.586754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.586921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.586951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.587082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.587111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 
00:37:19.308 [2024-07-21 08:33:28.587258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.587284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.587382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.587409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.587531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.587557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.587720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.587746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.587849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.587876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 
00:37:19.308 [2024-07-21 08:33:28.587973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.587999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.588101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.588128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.588290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.588317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.588450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.588476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.588568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.588594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 
00:37:19.308 [2024-07-21 08:33:28.588747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.588776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.588970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.588999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.589187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.589216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.589400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.589429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.589576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.589602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 
00:37:19.308 [2024-07-21 08:33:28.589755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.589785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.308 [2024-07-21 08:33:28.589953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.308 [2024-07-21 08:33:28.590024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.308 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.590196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.590225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.590379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.590408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.590550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.590582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 
00:37:19.309 [2024-07-21 08:33:28.590739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.590766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.590922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.590949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.591051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.591076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.591197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.591223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.591313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.591339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 
00:37:19.309 [2024-07-21 08:33:28.591442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.591469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.591592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.591624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.591753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.591780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.591931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.591956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.592062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.592091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 
00:37:19.309 [2024-07-21 08:33:28.592220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.592246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.592350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.592394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.592561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.592590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.592743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.592773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.592898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.592924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 
00:37:19.309 [2024-07-21 08:33:28.593055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.593081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.593205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.593231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.593352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.593377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.593501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.593528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.593695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.593722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 
00:37:19.309 [2024-07-21 08:33:28.593848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.593876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.594034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.594060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.594186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.594212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.594324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.594350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 00:37:19.309 [2024-07-21 08:33:28.594482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.594508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 
00:37:19.309 [2024-07-21 08:33:28.594661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.309 [2024-07-21 08:33:28.594688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.309 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7fd7e4000b90 (addr=10.0.0.2, port=4420) through 2024-07-21 08:33:28.612083 ...]
00:37:19.312 [2024-07-21 08:33:28.612210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.612237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.612370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.612396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.612498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.612524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.612686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.612713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.612840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.612866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 
00:37:19.312 [2024-07-21 08:33:28.613028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.613056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.613214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.613240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.613372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.613398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.613530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.613572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.613703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.613730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 
00:37:19.312 [2024-07-21 08:33:28.613840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.613867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.613990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.614015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.614136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.614162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.614314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.312 [2024-07-21 08:33:28.614340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.312 qpair failed and we were unable to recover it. 00:37:19.312 [2024-07-21 08:33:28.614463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.614492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 
00:37:19.313 [2024-07-21 08:33:28.614674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.614701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.614840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.614869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.615008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.615037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.615180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.615207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.615359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.615385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 
00:37:19.313 [2024-07-21 08:33:28.615536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.615564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.615674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.615717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.615815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.615842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.615946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.615972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.616070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.616095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 
00:37:19.313 [2024-07-21 08:33:28.616246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.616274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.616419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.616445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.616572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.616599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.616756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.616783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.616940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.616969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 
00:37:19.313 [2024-07-21 08:33:28.617111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.617140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.617257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.617283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.617419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.617446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.617597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.617638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.617756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.617799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 
00:37:19.313 [2024-07-21 08:33:28.617928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.617954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.618080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.618107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.618242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.618269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.618465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.618491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.618608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.618646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 
00:37:19.313 [2024-07-21 08:33:28.618803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.618829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.618948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.618977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.619089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.619119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.619277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.619303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.619469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.619497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 
00:37:19.313 [2024-07-21 08:33:28.619603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.619671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.619839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.619881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.620031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.620057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.620160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.620187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.620372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.620399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 
00:37:19.313 [2024-07-21 08:33:28.620526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.620552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.620719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.620746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.620874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.620899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.621029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.621055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 00:37:19.313 [2024-07-21 08:33:28.621239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.313 [2024-07-21 08:33:28.621268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.313 qpair failed and we were unable to recover it. 
00:37:19.314 [2024-07-21 08:33:28.621415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.621441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.621569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.621611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.621792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.621822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.621932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.621961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.622140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.622166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 
00:37:19.314 [2024-07-21 08:33:28.622296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.622339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.622475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.622504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.622627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.622657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.622813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.622838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.622937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.622963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 
00:37:19.314 [2024-07-21 08:33:28.623139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.623167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.623292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.623335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.623487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.623513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.623622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.623666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.623778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.623807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 
00:37:19.314 [2024-07-21 08:33:28.623982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.624011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.624163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.624190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.624302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.624328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.624427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.624458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.624627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.624654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 
00:37:19.314 [2024-07-21 08:33:28.624822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.624850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.625006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.625050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.625229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.625255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.625386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.625413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.625560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.625589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 
00:37:19.314 [2024-07-21 08:33:28.625745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.625771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.625888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.625933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.626078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.626107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.626223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.626250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.626354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.626381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 
00:37:19.314 [2024-07-21 08:33:28.626479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.626509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.626685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.626716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.626845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.626871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.626976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.627002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.627126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.627151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 
00:37:19.314 [2024-07-21 08:33:28.627298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.627327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.627452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.627478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.627602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.627648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.627780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.627823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.314 [2024-07-21 08:33:28.627961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.627991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 
00:37:19.314 [2024-07-21 08:33:28.628106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.314 [2024-07-21 08:33:28.628133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.314 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.628268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.628294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.628429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.628455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.628642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.628672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.628787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.628813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 
00:37:19.315 [2024-07-21 08:33:28.628943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.628969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.629156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.629182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.629317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.629343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.629497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.629524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.629625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.629668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 
00:37:19.315 [2024-07-21 08:33:28.629806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.629836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.630009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.630038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.630213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.630239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.630391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.630420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.630588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.630622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 
00:37:19.315 [2024-07-21 08:33:28.630755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.630782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.630904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.630930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.631027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.631053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.631210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.631244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.631382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.631411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 
00:37:19.315 [2024-07-21 08:33:28.631590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.631661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.631795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.631821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.631971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.631999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.632109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.632138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.632265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.632291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 
00:37:19.315 [2024-07-21 08:33:28.632423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.632450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.632555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.632581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.632727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.632754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.632881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.632907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.633000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.633026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 
00:37:19.315 [2024-07-21 08:33:28.633153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.633178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.633360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.633389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.633511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.633537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.633690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.633717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 00:37:19.315 [2024-07-21 08:33:28.633871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.315 [2024-07-21 08:33:28.633899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.315 qpair failed and we were unable to recover it. 
00:37:19.315 [2024-07-21 08:33:28.634072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.634101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.634248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.634275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.634447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.634476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.634605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.634659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.634792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.634819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 
00:37:19.316 [2024-07-21 08:33:28.634951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.634977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.635114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.635156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.635339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.635366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.635507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.635533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.635693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.635720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 
00:37:19.316 [2024-07-21 08:33:28.635853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.635878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.636030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.636055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.636185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.636228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.636363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.636389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.636491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.636517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 
00:37:19.316 [2024-07-21 08:33:28.636611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.636644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.636754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.636780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.636871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.636899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.636993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.637018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.637164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.637193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 
00:37:19.316 [2024-07-21 08:33:28.637331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.637361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.637538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.637564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.637697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.637724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.637851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.637880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.638037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.638067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 
00:37:19.316 [2024-07-21 08:33:28.638181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.638207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.638335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.638364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.638483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.638512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.638664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.638694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.638846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.638872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 
00:37:19.316 [2024-07-21 08:33:28.639021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.639047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.639190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.639219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.639362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.639391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.639503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.639529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.639662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.639688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 
00:37:19.316 [2024-07-21 08:33:28.639799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.639827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.639963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.639993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.640146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.640173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.640294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.640321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.316 [2024-07-21 08:33:28.640451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.640480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 
00:37:19.316 [2024-07-21 08:33:28.640649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.316 [2024-07-21 08:33:28.640678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.316 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.640822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.640848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.640955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.640981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.641084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.641110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.641257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.641286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 
00:37:19.317 [2024-07-21 08:33:28.641413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.641439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.641566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.641593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.641696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.641724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.641857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.641883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.642007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.642033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 
00:37:19.317 [2024-07-21 08:33:28.642209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.642238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.642373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.642402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.642574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.642603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.642755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.642782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.642908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.642934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 
00:37:19.317 [2024-07-21 08:33:28.643080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.643109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.643250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.643280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.643428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.643453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.643549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.643576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.643691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.643719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 
00:37:19.317 [2024-07-21 08:33:28.643877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.643903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.644029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.644055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.644151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.644177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.644333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.644366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.644531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.644562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 
00:37:19.317 [2024-07-21 08:33:28.644689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.644716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.644844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.644887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.645033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.645059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.645192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.645218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.645309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.645335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 
00:37:19.317 [2024-07-21 08:33:28.645461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.645487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.645632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.645662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.645809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.645838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.646008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.646035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.646128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.646170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 
00:37:19.317 [2024-07-21 08:33:28.646282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.646311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.646445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.646471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.646629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.646656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.646786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.646813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.646908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.646935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 
00:37:19.317 [2024-07-21 08:33:28.647074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.647100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.317 [2024-07-21 08:33:28.647253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.317 [2024-07-21 08:33:28.647279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.317 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.647388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.647431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.647546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.647576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.647724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.647751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 
00:37:19.318 [2024-07-21 08:33:28.647880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.647906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.647999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.648026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.648125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.648150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.648292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.648325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.648446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.648472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 
00:37:19.318 [2024-07-21 08:33:28.648607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.648643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.648794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.648822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.648960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.648989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.649114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.649141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.649269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.649296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 
00:37:19.318 [2024-07-21 08:33:28.649425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.649450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.649630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.649660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.649813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.649839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.649938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.649964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.650084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.650113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 
00:37:19.318 [2024-07-21 08:33:28.650252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.650281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.650460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.650486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.650590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.650623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.650750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.650777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.650932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.650961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 
00:37:19.318 [2024-07-21 08:33:28.651107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.651133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.651254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.651279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.651407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.651437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.651584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.651630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.651758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.651784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 
00:37:19.318 [2024-07-21 08:33:28.651934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.651960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.652077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.652106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.652268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.652295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.652421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.652448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.652545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.652570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 
00:37:19.318 [2024-07-21 08:33:28.652744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.652770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.652866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.652892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.653093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.653119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.653218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.653261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.653432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.653461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 
00:37:19.318 [2024-07-21 08:33:28.653597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.653635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.318 qpair failed and we were unable to recover it. 00:37:19.318 [2024-07-21 08:33:28.653789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.318 [2024-07-21 08:33:28.653815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.653944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.653973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.654141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.654173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.654271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.654300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 
00:37:19.319 [2024-07-21 08:33:28.654423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.654466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.654626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.654669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.654792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.654818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.654961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.654988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.655137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.655163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 
00:37:19.319 [2024-07-21 08:33:28.655271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.655318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.655431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.655460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.655605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.655662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.655813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.655839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.655940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.655966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 
00:37:19.319 [2024-07-21 08:33:28.656119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.656145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.656335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.656362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.656491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.656516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.656625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.656677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 00:37:19.319 [2024-07-21 08:33:28.656816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.319 [2024-07-21 08:33:28.656844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.319 qpair failed and we were unable to recover it. 
00:37:19.322 [2024-07-21 08:33:28.675611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.675649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.675811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.675838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.676007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.676034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.676183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.676208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.676335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.676378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 
00:37:19.322 [2024-07-21 08:33:28.676513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.676542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.676684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.676714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.676871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.676897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.677015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.677041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.677184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.677216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 
00:37:19.322 [2024-07-21 08:33:28.677367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.677397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.677550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.677577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.677711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.677755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.677891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.677919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.678038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.678067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 
00:37:19.322 [2024-07-21 08:33:28.678243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.678269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.678362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.678388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.678575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.678601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.678721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.678748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.678878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.678904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 
00:37:19.322 [2024-07-21 08:33:28.679076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.679104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.679211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.679240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.679384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.679413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.679564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.679590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.679727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.679771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 
00:37:19.322 [2024-07-21 08:33:28.679916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.679944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.680091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.680124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.680253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.322 [2024-07-21 08:33:28.680279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.322 qpair failed and we were unable to recover it. 00:37:19.322 [2024-07-21 08:33:28.680383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.680410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.680544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.680570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 
00:37:19.323 [2024-07-21 08:33:28.680720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.680747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.680848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.680874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.680972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.680999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.681122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.681150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.681317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.681346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 
00:37:19.323 [2024-07-21 08:33:28.681465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.681509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.681637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.681667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.681763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.681790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.681955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.681981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.682172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.682198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 
00:37:19.323 [2024-07-21 08:33:28.682350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.682383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.682490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.682518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.682691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.682721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.682872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.682898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.683009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.683035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 
00:37:19.323 [2024-07-21 08:33:28.683131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.683158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.683310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.683340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.683461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.683488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.683585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.683611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.683725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.683750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 
00:37:19.323 [2024-07-21 08:33:28.683844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.683888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.684035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.684062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.684191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.684217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.684380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.684410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.684550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.684577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 
00:37:19.323 [2024-07-21 08:33:28.684730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.684756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.684910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.684939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.685041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.685069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.685239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.685267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.685429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.685458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 
00:37:19.323 [2024-07-21 08:33:28.685602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.685640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.685787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.685812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.685954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.685983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.686130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.686157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.686290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.686316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 
00:37:19.323 [2024-07-21 08:33:28.686412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.686438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.686587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.686626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.686789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.686815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.323 [2024-07-21 08:33:28.686969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.323 [2024-07-21 08:33:28.686995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.323 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.687136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.687164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 
00:37:19.324 [2024-07-21 08:33:28.687300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.687329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.687490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.687516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.687700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.687731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.687878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.687903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.688030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.688056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 
00:37:19.324 [2024-07-21 08:33:28.688154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.688179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.688302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.688330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.688461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.688487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.688661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.688688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.688815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.688841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 
00:37:19.324 [2024-07-21 08:33:28.688943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.688973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.689128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.689157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.689262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.689291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.689412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.689456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.689606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.689644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 
00:37:19.324 [2024-07-21 08:33:28.689788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.689815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.690010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.690036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.690135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.690161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.690267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.690293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.690449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.690475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 
00:37:19.324 [2024-07-21 08:33:28.690631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.690658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.690756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.690782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.690935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.690961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.691096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.691125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.691242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.691271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 
00:37:19.324 [2024-07-21 08:33:28.691457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.691483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.691620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.691647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.691746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.691772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.691922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.691951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.692123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.692149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 
00:37:19.324 [2024-07-21 08:33:28.692305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.692331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.692460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.692489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.692643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.692674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.324 [2024-07-21 08:33:28.692851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.324 [2024-07-21 08:33:28.692877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.324 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.692973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.693015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 
00:37:19.325 [2024-07-21 08:33:28.693161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.693187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.693280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.693305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.693482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.693512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.693658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.693685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.693843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.693869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 
00:37:19.325 [2024-07-21 08:33:28.694024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.694053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.694222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.694248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.694373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.694416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.694518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.694547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.694692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.694722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 
00:37:19.325 [2024-07-21 08:33:28.694867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.694893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.695025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.695067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.695181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.695210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.695385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.695411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.695501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.695526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 
00:37:19.325 [2024-07-21 08:33:28.695666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.695696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.695819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.695862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.696025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.696051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.696214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.696239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.696344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.696370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 
00:37:19.325 [2024-07-21 08:33:28.696498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.696524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.696643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.696674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.696851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.696878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.697047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.697076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.697207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.697235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 
00:37:19.325 [2024-07-21 08:33:28.697406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.697435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.697600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.697653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.697764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.697790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.697935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.697967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.698149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.698178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 
00:37:19.325 [2024-07-21 08:33:28.698351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.698377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.698474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.698517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.698651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.698681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.698831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.698860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 00:37:19.325 [2024-07-21 08:33:28.698984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.699010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.325 qpair failed and we were unable to recover it. 
00:37:19.325 [2024-07-21 08:33:28.699109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.325 [2024-07-21 08:33:28.699136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.699247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.699290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.699422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.699448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.699573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.699604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.699740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.699766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 
00:37:19.326 [2024-07-21 08:33:28.699903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.699929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.700079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.700108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.700257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.700284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.700387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.700414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.700560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.700589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 
00:37:19.326 [2024-07-21 08:33:28.700765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.700795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.700937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.700967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.701107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.701136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.701237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.701266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.701387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.701416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 
00:37:19.326 [2024-07-21 08:33:28.701557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.701586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.701752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.701780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.701871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.701915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.702064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.702094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.702240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.702267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 
00:37:19.326 [2024-07-21 08:33:28.702388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.702421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.702580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.702609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.702737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.702766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.702906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.702933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.703069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.703095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 
00:37:19.326 [2024-07-21 08:33:28.703252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.703281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.703421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.703451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.703577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.703626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.703801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.703827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.703970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.703999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 
00:37:19.326 [2024-07-21 08:33:28.704136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.704165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.704311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.704338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.704462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.704488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.704639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.704669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.704808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.704837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 
00:37:19.326 [2024-07-21 08:33:28.704974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.705002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.705107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.705134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.705310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.705339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.705471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.326 [2024-07-21 08:33:28.705500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.326 qpair failed and we were unable to recover it. 00:37:19.326 [2024-07-21 08:33:28.705680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.705707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 
00:37:19.327 [2024-07-21 08:33:28.705835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.705861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.705966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.705992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.706161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.706187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.706315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.706343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.706512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.706541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 
00:37:19.327 [2024-07-21 08:33:28.706666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.706692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.706826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.706853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.706977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.707003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.707131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.707157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.707283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.707311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 
00:37:19.327 [2024-07-21 08:33:28.707447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.707477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.707599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.707635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.707762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.707788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.707917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.707944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.708101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.708131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 
00:37:19.327 [2024-07-21 08:33:28.708278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.708305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.708471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.708500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.708652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.708679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.708809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.708837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.708990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.709018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 
00:37:19.327 [2024-07-21 08:33:28.709170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.709203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.709340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.709368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.709503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.709532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.709671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.709715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.709822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.709850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 
00:37:19.327 [2024-07-21 08:33:28.710010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.710039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.710173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.710203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.710370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.710399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.710509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.710539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.710716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.710742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 
00:37:19.327 [2024-07-21 08:33:28.710850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.710894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.711116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.711187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.711326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.711354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.711487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.711515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.711656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.711683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 
00:37:19.327 [2024-07-21 08:33:28.711811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.711838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.711934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.711962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.712120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.327 [2024-07-21 08:33:28.712147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.327 qpair failed and we were unable to recover it. 00:37:19.327 [2024-07-21 08:33:28.712296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.712325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.712436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.712464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 
00:37:19.328 [2024-07-21 08:33:28.712579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.712605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.712770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.712796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.712965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.712991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.713143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.713168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.713295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.713321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 
00:37:19.328 [2024-07-21 08:33:28.713473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.713502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.713648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.713677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.713829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.713855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.713981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.714008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.714131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.714160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 
00:37:19.328 [2024-07-21 08:33:28.714264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.714291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.714418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.714444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.714619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.714648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.714797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.714822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.714918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.714943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 
00:37:19.328 [2024-07-21 08:33:28.715096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.715122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.715282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.715309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.715408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.715435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.715593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.715627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.715789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.715816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 
00:37:19.328 [2024-07-21 08:33:28.715942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.715972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.716104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.716130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.716233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.716259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.716436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.716465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.716640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.716666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 
00:37:19.328 [2024-07-21 08:33:28.716815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.716844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.716979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.717008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.717176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.717206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.717353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.717379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 00:37:19.328 [2024-07-21 08:33:28.717504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.328 [2024-07-21 08:33:28.717549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.328 qpair failed and we were unable to recover it. 
00:37:19.328 [2024-07-21 08:33:28.717693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.717723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.717853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.717882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.718013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.718039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.718142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.718168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.718279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.718324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 
00:37:19.329 [2024-07-21 08:33:28.718499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.718528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.718709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.718736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.718889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.718918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.719080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.719109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.719244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.719287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 
00:37:19.329 [2024-07-21 08:33:28.719441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.719466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.719583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.719635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.719774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.719803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.719965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.719991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.720118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.720144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 
00:37:19.329 [2024-07-21 08:33:28.720244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.720272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.720401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.720427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.720564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.720591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.720787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.720814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.720945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.720989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 
00:37:19.329 [2024-07-21 08:33:28.721176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.721202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.721301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.721328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.721483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.721509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.721608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.721672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 00:37:19.329 [2024-07-21 08:33:28.721801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.329 [2024-07-21 08:33:28.721830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.329 qpair failed and we were unable to recover it. 
00:37:19.329 [2024-07-21 08:33:28.721971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.722001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.722122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.722148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.722302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.722328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.722487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.722517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.722642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.722672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.722789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.722819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.722925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.722951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.723075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.723101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.723268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.723297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.723497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.723526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.723682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.723709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.723827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.723853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.724022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.724049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.724202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.329 [2024-07-21 08:33:28.724228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.329 qpair failed and we were unable to recover it.
00:37:19.329 [2024-07-21 08:33:28.724345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.724387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.724526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.724555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.724701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.724731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.724883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.724909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.725035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.725060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.725190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.725221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.725348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.725390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.725517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.725544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.725672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.725715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.725851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.725880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.726049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.726078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.726225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.726251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.726376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.726402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.726561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.726590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.726721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.726750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.726899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.726925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.727081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.727124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.727236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.727268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.727449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.727479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.727626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.727654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.727776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.727801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.727953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.727981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.728163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.728190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.728322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.728348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.728497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.728524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.728675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.728704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.728879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.728905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.729058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.729084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.729256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.729285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.729430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.729459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.729584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.729610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.729793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.729824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.729952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.729978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.730105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.730133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.730257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.730299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.730442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.730471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.730626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.730654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.730758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.330 [2024-07-21 08:33:28.730785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.330 qpair failed and we were unable to recover it.
00:37:19.330 [2024-07-21 08:33:28.730913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.730940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.731059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.731084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.731213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.731254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.731394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.731423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.731536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.731581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.731712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.731738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.731863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.731905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.732011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.732039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.732215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.732241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.732394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.732422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.732564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.732596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.732717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.732746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.732858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.732901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.733004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.733030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.733152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.733178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.733318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.733346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.733485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.733513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.733660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.733687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.733810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.733836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.733985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.734015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.734172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.734201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.734378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.734403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.734531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.734573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.734714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.734740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.734854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.734879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.735008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.735034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.735159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.735184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.735328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.735357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.735523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.735550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.735679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.735706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.735824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.735866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.736011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.736040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.736178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.736206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.736380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.736412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.736547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.736576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.736722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.736748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.736885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.736929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.737056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.737083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.737209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.737235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.737360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.737389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.331 qpair failed and we were unable to recover it.
00:37:19.331 [2024-07-21 08:33:28.737565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.331 [2024-07-21 08:33:28.737592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.332 qpair failed and we were unable to recover it.
00:37:19.332 [2024-07-21 08:33:28.737775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.332 [2024-07-21 08:33:28.737814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.332 qpair failed and we were unable to recover it.
00:37:19.332 [2024-07-21 08:33:28.737937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.332 [2024-07-21 08:33:28.737969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.332 qpair failed and we were unable to recover it.
00:37:19.332 [2024-07-21 08:33:28.738127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.738171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.738347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.738390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.738487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.738514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.738633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.738660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.738763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.738788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 
00:37:19.332 [2024-07-21 08:33:28.738948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.738973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.739125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.739150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.739298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.739327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.739473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.739500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.739618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.739645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 
00:37:19.332 [2024-07-21 08:33:28.739791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.739835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.739983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.740032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.740173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.740216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.740348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.740373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.740469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.740494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 
00:37:19.332 [2024-07-21 08:33:28.740597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.740628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.740757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.740783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.740937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.740984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.741119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.741162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.741296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.741321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 
00:37:19.332 [2024-07-21 08:33:28.741476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.741501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.741606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.741639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.741789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.741832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.741961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.741986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.742113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.742139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 
00:37:19.332 [2024-07-21 08:33:28.742273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.742299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.742406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.742432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.742545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.742571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.742700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.742749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.742876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.742902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 
00:37:19.332 [2024-07-21 08:33:28.743003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.743028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.743137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.743163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.743295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.743320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.743421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.743448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 00:37:19.332 [2024-07-21 08:33:28.743574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.332 [2024-07-21 08:33:28.743599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.332 qpair failed and we were unable to recover it. 
00:37:19.332 [2024-07-21 08:33:28.743743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.743769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.743896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.743922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.744027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.744053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.744209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.744235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.744369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.744394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 
00:37:19.333 [2024-07-21 08:33:28.744496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.744521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.744675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.744702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.744831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.744857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.744961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.744988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.745146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.745171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 
00:37:19.333 [2024-07-21 08:33:28.745305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.745330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.745464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.745490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.745621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.745647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.745794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.745836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.745947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.745975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 
00:37:19.333 [2024-07-21 08:33:28.746142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.746167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.746274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.746299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.746429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.746455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.746563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.746588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.746725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.746751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 
00:37:19.333 [2024-07-21 08:33:28.746875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.746900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.747055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.747080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.747234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.747264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.747385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.747410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.747508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.747533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 
00:37:19.333 [2024-07-21 08:33:28.747660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.747686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.747839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.747866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.748015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.748058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.748215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.748240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.748347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.748373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 
00:37:19.333 [2024-07-21 08:33:28.748479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.748504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.748635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.748661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.748833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.748876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.749018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.333 [2024-07-21 08:33:28.749046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.333 qpair failed and we were unable to recover it. 00:37:19.333 [2024-07-21 08:33:28.749215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.749240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 
00:37:19.334 [2024-07-21 08:33:28.749334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.749361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.749475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.749500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.749597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.749632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.749765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.749791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.749915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.749941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 
00:37:19.334 [2024-07-21 08:33:28.750047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.750073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.750208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.750234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.750393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.750419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.750548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.750572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.750728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.750772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 
00:37:19.334 [2024-07-21 08:33:28.750914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.750943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.751113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.751156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.751310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.751336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.751471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.751497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 00:37:19.334 [2024-07-21 08:33:28.751600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.334 [2024-07-21 08:33:28.751633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.334 qpair failed and we were unable to recover it. 
00:37:19.334 [2024-07-21 08:33:28.751735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.334 [2024-07-21 08:33:28.751761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.334 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats continuously from 08:33:28.751883 through 08:33:28.762553 ...]
00:37:19.336 [2024-07-21 08:33:28.762704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.336 [2024-07-21 08:33:28.762743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.336 qpair failed and we were unable to recover it.
[... the same failure for tqpair=0x64d560 repeats continuously from 08:33:28.762907 through 08:33:28.770669 ...]
00:37:19.337 [2024-07-21 08:33:28.770771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.770796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.770924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.770953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.771104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.771132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.771300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.771327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.771463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.771491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 
00:37:19.337 [2024-07-21 08:33:28.771651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.771678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.771775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.771800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.771906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.771931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.772095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.772123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.772290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.772318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 
00:37:19.337 [2024-07-21 08:33:28.772483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.772512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.772689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.772716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.772882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.772920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.773075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.773121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.773246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.773291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 
00:37:19.337 [2024-07-21 08:33:28.773394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.773419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.773545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.773570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.773730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.773758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.773883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.773914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.774043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.774070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 
00:37:19.337 [2024-07-21 08:33:28.774222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.774248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.774374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.774400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.774553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.774579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.774750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.774776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.774952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.774998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 
00:37:19.337 [2024-07-21 08:33:28.775154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.775180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.337 [2024-07-21 08:33:28.775304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.337 [2024-07-21 08:33:28.775330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.337 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.775428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.775455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.775579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.775605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.775740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.775784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 
00:37:19.338 [2024-07-21 08:33:28.775897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.775925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.776089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.776116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.776217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.776244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.776375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.776401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.776525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.776551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 
00:37:19.338 [2024-07-21 08:33:28.776682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.776707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.776827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.776852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.777008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.777034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.777136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.777162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.777255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.777280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 
00:37:19.338 [2024-07-21 08:33:28.777423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.777452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.777594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.777628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.777757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.777786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.777915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.777940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.778135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.778164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 
00:37:19.338 [2024-07-21 08:33:28.778308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.778341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.778443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.778471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.778571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.778600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.778745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.778795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.778973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.779017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 
00:37:19.338 [2024-07-21 08:33:28.779161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.779205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.779316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.779342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.779439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.779465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.779611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.779644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.779780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.779824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 
00:37:19.338 [2024-07-21 08:33:28.779946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.779991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.780137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.780181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.780340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.780367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.780497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.780523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.780628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.780655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 
00:37:19.338 [2024-07-21 08:33:28.780840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.780885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.781035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.781079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.781181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.781207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.781306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.781333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.781440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.781467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 
00:37:19.338 [2024-07-21 08:33:28.781566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.781592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.781764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.781808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.338 [2024-07-21 08:33:28.781946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.338 [2024-07-21 08:33:28.781976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.338 qpair failed and we were unable to recover it. 00:37:19.339 [2024-07-21 08:33:28.782083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.339 [2024-07-21 08:33:28.782112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.339 qpair failed and we were unable to recover it. 00:37:19.339 [2024-07-21 08:33:28.782249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.339 [2024-07-21 08:33:28.782277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.339 qpair failed and we were unable to recover it. 
00:37:19.339 [2024-07-21 08:33:28.782401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.339 [2024-07-21 08:33:28.782428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.339 qpair failed and we were unable to recover it. 00:37:19.339 [2024-07-21 08:33:28.782549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.339 [2024-07-21 08:33:28.782576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.339 qpair failed and we were unable to recover it. 00:37:19.339 [2024-07-21 08:33:28.782711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.339 [2024-07-21 08:33:28.782762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.339 qpair failed and we were unable to recover it. 00:37:19.339 [2024-07-21 08:33:28.782928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.339 [2024-07-21 08:33:28.782955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.339 qpair failed and we were unable to recover it. 00:37:19.339 [2024-07-21 08:33:28.783096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.339 [2024-07-21 08:33:28.783146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.339 qpair failed and we were unable to recover it. 
00:37:19.339 [2024-07-21 08:33:28.783273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.339 [2024-07-21 08:33:28.783299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.339 qpair failed and we were unable to recover it. 
[... repeats elided: the same record pair — posix.c:1038:posix_sock_create "connect() failed, errno = 111" (ECONNREFUSED) followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." — recurs continuously from 08:33:28.783404 through 08:33:28.802263, alternating between tqpair=0x7fd7dc000b90 and tqpair=0x64d560 ...]
00:37:19.342 [2024-07-21 08:33:28.802397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.802424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.802596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.802627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.802761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.802786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.802934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.802963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.803084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.803126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 
00:37:19.342 [2024-07-21 08:33:28.803241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.803268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.803396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.803437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.803546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.803574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.803700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.803726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.803827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.803853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 
00:37:19.342 [2024-07-21 08:33:28.804003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.804030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.804167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.804195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.804385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.804413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.804531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.804558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.804685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.804711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 
00:37:19.342 [2024-07-21 08:33:28.804815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.804839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.804958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.804986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.805122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.805154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.805298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.805325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.805433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.805460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 
00:37:19.342 [2024-07-21 08:33:28.805628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.805654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.805781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.805806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.805957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.805986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.806126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.806153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.806310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.806337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 
00:37:19.342 [2024-07-21 08:33:28.806502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.806530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.342 [2024-07-21 08:33:28.806648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.342 [2024-07-21 08:33:28.806701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.342 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.806839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.806864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.806983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.807009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.807133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.807157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 
00:37:19.343 [2024-07-21 08:33:28.807319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.807347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.807469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.807498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.807624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.807649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.807753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.807777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.807931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.807956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 
00:37:19.343 [2024-07-21 08:33:28.808146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.808188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.808410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.808439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.808544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.808573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.808716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.808741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.808843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.808868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 
00:37:19.343 [2024-07-21 08:33:28.809005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.809032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.809194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.809221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.809362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.809390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.809510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.809534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.809660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.809685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 
00:37:19.343 [2024-07-21 08:33:28.809819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.809844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.809951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.809976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.810122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.810151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.810330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.810356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.810454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.810479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 
00:37:19.343 [2024-07-21 08:33:28.810612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.810650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.810755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.810782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.810913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.810937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.811055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.811080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.811235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.811260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 
00:37:19.343 [2024-07-21 08:33:28.811444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.811472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.811645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.811671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.811766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.811790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.811937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.811972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.812115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.812144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 
00:37:19.343 [2024-07-21 08:33:28.812292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.812317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.812410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.812437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.812547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.812574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.812698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.812728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.812881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.812907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 
00:37:19.343 [2024-07-21 08:33:28.813010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.813035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.813165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.813190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.343 [2024-07-21 08:33:28.813344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.343 [2024-07-21 08:33:28.813369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.343 qpair failed and we were unable to recover it. 00:37:19.344 [2024-07-21 08:33:28.813536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.813562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 00:37:19.344 [2024-07-21 08:33:28.813662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.813687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 
00:37:19.344 [2024-07-21 08:33:28.813776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.813802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 00:37:19.344 [2024-07-21 08:33:28.813901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.813943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 00:37:19.344 [2024-07-21 08:33:28.814071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.814097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 00:37:19.344 [2024-07-21 08:33:28.814225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.814250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 00:37:19.344 [2024-07-21 08:33:28.814371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.814398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 
00:37:19.344 [2024-07-21 08:33:28.814541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.814570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 00:37:19.344 [2024-07-21 08:33:28.814702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.814729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 00:37:19.344 [2024-07-21 08:33:28.814832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.814857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 00:37:19.344 [2024-07-21 08:33:28.814955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.814980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 00:37:19.344 [2024-07-21 08:33:28.815137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.344 [2024-07-21 08:33:28.815162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.344 qpair failed and we were unable to recover it. 
00:37:19.344 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock retry pair repeats continuously from 08:33:28.815305 through 08:33:28.832662; every attempt against tqpair=0x64d560 (addr=10.0.0.2, port=4420) failed and the qpair could not be recovered ...]
00:37:19.347 [2024-07-21 08:33:28.832833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.832862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.833001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.833029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.833203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.833228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.833355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.833397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.833553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.833582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 
00:37:19.347 [2024-07-21 08:33:28.833684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.833710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.833809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.833834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.833963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.833988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.834141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.834169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.834312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.834340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 
00:37:19.347 [2024-07-21 08:33:28.834467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.834511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.834629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.834670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.834769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.834795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.834946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.834975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.835149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.835175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 
00:37:19.347 [2024-07-21 08:33:28.835318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.835347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.835487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.835515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.835666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.835692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.835788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.835814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.835934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.835959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 
00:37:19.347 [2024-07-21 08:33:28.836122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.836151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.836290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.836318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.836467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.836492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.836587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.836619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.836749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.836779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 
00:37:19.347 [2024-07-21 08:33:28.836950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.836978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.837087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.837113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.837235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.837261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.837411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.837440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.837584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.837629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 
00:37:19.347 [2024-07-21 08:33:28.837792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.837818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.837937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.837982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.838112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.838140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.838251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.347 [2024-07-21 08:33:28.838279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.347 qpair failed and we were unable to recover it. 00:37:19.347 [2024-07-21 08:33:28.838426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.838453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 
00:37:19.348 [2024-07-21 08:33:28.838551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.838577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.838775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.838801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.838896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.838921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.839048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.839074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.839196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.839236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 
00:37:19.348 [2024-07-21 08:33:28.839376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.839404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.839557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.839583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.839712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.839738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.839866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.839907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.840083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.840111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 
00:37:19.348 [2024-07-21 08:33:28.840248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.840276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.840424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.840449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.840579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.840605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.840708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.840734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.840878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.840907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 
00:37:19.348 [2024-07-21 08:33:28.841026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.841051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.841203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.841229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.841356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.841384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.841552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.841581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.841765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.841791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 
00:37:19.348 [2024-07-21 08:33:28.841963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.841991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.842108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.842136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.842241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.842269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.842435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.842463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.842596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.842632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 
00:37:19.348 [2024-07-21 08:33:28.842782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.842807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.842938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.842963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.843088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.843114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.843211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.843236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.843380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.843405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 
00:37:19.348 [2024-07-21 08:33:28.843545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.843570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.843672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.843699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.843820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.843845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.844030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.844058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.844205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.844250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 
00:37:19.348 [2024-07-21 08:33:28.844410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.844436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.844560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.844584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.844739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.844771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.844875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.844899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.348 [2024-07-21 08:33:28.845023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.845048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 
00:37:19.348 [2024-07-21 08:33:28.845139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.348 [2024-07-21 08:33:28.845164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.348 qpair failed and we were unable to recover it. 00:37:19.349 [2024-07-21 08:33:28.845272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.349 [2024-07-21 08:33:28.845298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.349 qpair failed and we were unable to recover it. 00:37:19.349 [2024-07-21 08:33:28.845397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.349 [2024-07-21 08:33:28.845423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.349 qpair failed and we were unable to recover it. 00:37:19.349 [2024-07-21 08:33:28.845524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.349 [2024-07-21 08:33:28.845551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.349 qpair failed and we were unable to recover it. 00:37:19.349 [2024-07-21 08:33:28.845701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.349 [2024-07-21 08:33:28.845727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.349 qpair failed and we were unable to recover it. 
00:37:19.351 [2024-07-21 08:33:28.863664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.351 [2024-07-21 08:33:28.863690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.351 qpair failed and we were unable to recover it. 00:37:19.351 [2024-07-21 08:33:28.863795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.351 [2024-07-21 08:33:28.863836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.351 qpair failed and we were unable to recover it. 00:37:19.351 [2024-07-21 08:33:28.863984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.351 [2024-07-21 08:33:28.864011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.351 qpair failed and we were unable to recover it. 00:37:19.351 [2024-07-21 08:33:28.864151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.351 [2024-07-21 08:33:28.864179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.864354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.864380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 
00:37:19.352 [2024-07-21 08:33:28.864558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.864585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.864719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.864745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.864853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.864880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.865038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.865063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.865199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.865224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 
00:37:19.352 [2024-07-21 08:33:28.865349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.865374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.865552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.865579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.865726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.865752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.865920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.865948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.866114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.866141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 
00:37:19.352 [2024-07-21 08:33:28.866314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.866341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.866461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.866503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.866656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.866682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.866773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.866802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.866934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.866959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 
00:37:19.352 [2024-07-21 08:33:28.867086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.867112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.867282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.867310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.867462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.867487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.867623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.867649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.867776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.867801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 
00:37:19.352 [2024-07-21 08:33:28.867906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.867933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.868057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.868082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.868229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.868256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.868401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.868426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.868579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.868627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 
00:37:19.352 [2024-07-21 08:33:28.868765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.868792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.868932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.868960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.869111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.869136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.869270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.869295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.869409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.869437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 
00:37:19.352 [2024-07-21 08:33:28.869556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.869585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.869755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.869780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.869907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.869932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.870056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.870083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.870231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.870259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 
00:37:19.352 [2024-07-21 08:33:28.870413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.870438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.870566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.870590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.870727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.870753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.870929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.870957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 00:37:19.352 [2024-07-21 08:33:28.871103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.352 [2024-07-21 08:33:28.871128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.352 qpair failed and we were unable to recover it. 
00:37:19.353 [2024-07-21 08:33:28.871262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.871303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.871415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.871442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.871610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.871645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.871802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.871827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.871945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.871970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 
00:37:19.353 [2024-07-21 08:33:28.872131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.872156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.872311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.872336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.872467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.872492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.872619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.872645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.872772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.872800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 
00:37:19.353 [2024-07-21 08:33:28.872949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.872976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.873126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.873151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.873269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.873294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.873421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.873449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.873590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.873637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 
00:37:19.353 [2024-07-21 08:33:28.873783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.873808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.873916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.873941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.874065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.874092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.874211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.874254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.874419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.874444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 
00:37:19.353 [2024-07-21 08:33:28.874545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.874570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.874680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.874706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.874813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.874838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.874965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.874991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.875117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.875159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 
00:37:19.353 [2024-07-21 08:33:28.875297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.875324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.875430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.875458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.875585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.875610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.875728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.875753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 00:37:19.353 [2024-07-21 08:33:28.875864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.353 [2024-07-21 08:33:28.875891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.353 qpair failed and we were unable to recover it. 
00:37:19.353 [2024-07-21 08:33:28.876028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-07-21 08:33:28.876053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.353 qpair failed and we were unable to recover it.
00:37:19.353 [2024-07-21 08:33:28.876156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-07-21 08:33:28.876180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.353 qpair failed and we were unable to recover it.
00:37:19.353 [2024-07-21 08:33:28.876328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-07-21 08:33:28.876353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.353 qpair failed and we were unable to recover it.
00:37:19.353 [2024-07-21 08:33:28.876466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-07-21 08:33:28.876494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.353 qpair failed and we were unable to recover it.
00:37:19.353 [2024-07-21 08:33:28.876654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-07-21 08:33:28.876680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.353 qpair failed and we were unable to recover it.
00:37:19.353 [2024-07-21 08:33:28.876777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-07-21 08:33:28.876803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.353 qpair failed and we were unable to recover it.
00:37:19.353 [2024-07-21 08:33:28.876905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-07-21 08:33:28.876930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.353 qpair failed and we were unable to recover it.
00:37:19.353 [2024-07-21 08:33:28.877051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-07-21 08:33:28.877078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.353 qpair failed and we were unable to recover it.
00:37:19.353 [2024-07-21 08:33:28.877257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-07-21 08:33:28.877281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.353 qpair failed and we were unable to recover it.
00:37:19.353 [2024-07-21 08:33:28.877384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-07-21 08:33:28.877409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.353 qpair failed and we were unable to recover it.
00:37:19.353 [2024-07-21 08:33:28.877561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.353 [2024-07-21 08:33:28.877585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.353 qpair failed and we were unable to recover it.
00:37:19.353 [2024-07-21 08:33:28.877716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.877748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.877920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.877949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.878104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.878129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.878259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.878284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.878417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.878441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.878622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.878681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.878794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.878819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.878994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.879020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.879169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.879198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.879307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.879334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.879484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.879509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.879648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.879674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.879817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.879845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.880013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.880039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.880170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.880195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.880284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.880309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.880407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.880432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.880594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.880630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.880793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.880818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.880951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.880991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.881108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.881137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.881279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.881306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.881458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.881484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.881641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.881679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.881859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.881888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.882005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.882034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.882183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.882209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.882335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.882359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.882515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.882542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.882693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.882722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.882862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.882887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.883042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.883083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.883225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.883254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.883421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.883450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.883594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.883625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.883749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.883774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.883966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.883992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.884124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.884149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.884269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.884295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.884446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.884478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.884580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.354 [2024-07-21 08:33:28.884605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.354 qpair failed and we were unable to recover it.
00:37:19.354 [2024-07-21 08:33:28.884775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.884808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.884931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.884957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.885057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.885083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.885206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.885232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.885355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.885383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.885547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.885574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.885725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.885751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.885849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.885875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.886023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.886051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.886228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.886253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.886379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.886404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.886557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.886585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.886746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.886771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.886924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.886950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.887047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.887072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.887208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.887233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.887413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.887439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.887590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.887622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.887752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.887777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.887919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.887947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.888086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.888114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.888267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.888293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.888419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.888444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.888626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.888655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.888800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.888829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.888965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.888991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.889082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.889107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.889232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.889264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.889380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.889408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.889528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.889553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.889650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.889676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.889804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.889832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.889975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.890004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.890127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.890151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.890283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.890309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.355 [2024-07-21 08:33:28.890477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.355 [2024-07-21 08:33:28.890505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.355 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.890642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.890678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.890833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.890858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.891012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.891038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.891211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.891236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.891398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.891438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.891594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.891626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.891752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.891777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.891954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.891981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.892118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.892144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.892307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.892333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.892460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.892501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.892654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.892681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.892786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.892811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.892935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.892960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.893084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.893110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.893265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.893291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.893410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.893452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.893544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.893570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.893682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.893708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.893824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.893852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.893968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.893997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.894111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.894136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.894267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.894293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.894441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.894480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.894663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.894692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.894848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.894874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.895016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.356 [2024-07-21 08:33:28.895045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.356 qpair failed and we were unable to recover it.
00:37:19.356 [2024-07-21 08:33:28.895215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.895244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.356 [2024-07-21 08:33:28.895388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.895417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.356 [2024-07-21 08:33:28.895592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.895623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.356 [2024-07-21 08:33:28.895752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.895778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.356 [2024-07-21 08:33:28.895871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.895913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 
00:37:19.356 [2024-07-21 08:33:28.896105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.896141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.356 [2024-07-21 08:33:28.896281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.896309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.356 [2024-07-21 08:33:28.896467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.896493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.356 [2024-07-21 08:33:28.896659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.896686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.356 [2024-07-21 08:33:28.896812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.896838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 
00:37:19.356 [2024-07-21 08:33:28.896940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.896965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.356 [2024-07-21 08:33:28.897066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.897091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.356 [2024-07-21 08:33:28.897240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.897265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.356 [2024-07-21 08:33:28.897365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.356 [2024-07-21 08:33:28.897392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.356 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.897517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.897543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 
00:37:19.357 [2024-07-21 08:33:28.897698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.897724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.897819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.897845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.898010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.898035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.898132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.898157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.898290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.898315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 
00:37:19.357 [2024-07-21 08:33:28.898520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.898548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.898749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.898788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.898905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.898932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.899060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.899086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.899212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.899239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 
00:37:19.357 [2024-07-21 08:33:28.899346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.899371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.899528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.899554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.899680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.899707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.899833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.899858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.899989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.900015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 
00:37:19.357 [2024-07-21 08:33:28.900175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.900201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.900353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.900378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.900530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.900560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.900702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.900741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.900882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.900908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 
00:37:19.357 [2024-07-21 08:33:28.900999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.901024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.901247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.901273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.901383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.901410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.901567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.901592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.901709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.901736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 
00:37:19.357 [2024-07-21 08:33:28.901836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.901862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.901958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.901983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.902089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.902116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.902248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.902273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.902430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.902456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 
00:37:19.357 [2024-07-21 08:33:28.902556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.902582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.902803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.902829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.902964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.902990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.903117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.903142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.357 [2024-07-21 08:33:28.903237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.903263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 
00:37:19.357 [2024-07-21 08:33:28.903371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.357 [2024-07-21 08:33:28.903397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.357 qpair failed and we were unable to recover it. 00:37:19.643 [2024-07-21 08:33:28.903506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.643 [2024-07-21 08:33:28.903532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.643 qpair failed and we were unable to recover it. 00:37:19.643 [2024-07-21 08:33:28.903698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.643 [2024-07-21 08:33:28.903739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.903856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.903885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.904014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.904040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 
00:37:19.644 [2024-07-21 08:33:28.904177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.904204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.904335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.904361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.904490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.904516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.904642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.904668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.904792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65b610 is same with the state(5) to be set 00:37:19.644 [2024-07-21 08:33:28.904968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.904995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 
00:37:19.644 [2024-07-21 08:33:28.905099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.905125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.905227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.905252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.905403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.905429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.905571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.905600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.905734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.905760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 
00:37:19.644 [2024-07-21 08:33:28.905862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.905888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.906016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.906042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.906132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.906158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.906287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.906313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.906407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.906433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 
00:37:19.644 [2024-07-21 08:33:28.906560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.906585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.906726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.906753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.906856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.906886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.907049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.907075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.907173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.907198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 
00:37:19.644 [2024-07-21 08:33:28.907297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.907322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.907492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.907521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.907666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.907692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.907789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.907814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.907906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.907931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 
00:37:19.644 [2024-07-21 08:33:28.908036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.908063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.908187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.908212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.908370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.908395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.908497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.908522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.908624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.908650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 
00:37:19.644 [2024-07-21 08:33:28.908809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.908835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.909001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.909025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.909155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.909180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.909290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.644 [2024-07-21 08:33:28.909315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.644 qpair failed and we were unable to recover it. 00:37:19.644 [2024-07-21 08:33:28.909440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.909472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 
00:37:19.645 [2024-07-21 08:33:28.909608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.909660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.909789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.909814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.909967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.909993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.910125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.910151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.910255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.910281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 
00:37:19.645 [2024-07-21 08:33:28.910447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.910474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.910574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.910600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.910752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.910779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.910872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.910898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.911022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.911051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 
00:37:19.645 [2024-07-21 08:33:28.911206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.911231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.911355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.911383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.911536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.911565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.911692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.911718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.911848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.911875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 
00:37:19.645 [2024-07-21 08:33:28.912002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.912028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.912131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.912156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.912280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.912307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.912456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.912482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.912639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.912664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 
00:37:19.645 [2024-07-21 08:33:28.912755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.912781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.912877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.912902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.913005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.913032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.913133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.913160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.913320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.913346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 
00:37:19.645 [2024-07-21 08:33:28.913472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.913499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.913650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.913677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.913772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.913799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.913929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.913956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.914109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.914135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 
00:37:19.645 [2024-07-21 08:33:28.914261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.914288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.914443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.914469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.914628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.914655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.914787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.914813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.914918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.914944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 
00:37:19.645 [2024-07-21 08:33:28.915042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.915068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.915162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.915191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.915324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.915350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.915507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.915532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 00:37:19.645 [2024-07-21 08:33:28.915653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.645 [2024-07-21 08:33:28.915692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.645 qpair failed and we were unable to recover it. 
00:37:19.646 [2024-07-21 08:33:28.915830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.915857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.915989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.916015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.916116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.916143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.916246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.916272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.916373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.916401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 
00:37:19.646 [2024-07-21 08:33:28.916531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.916558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.916719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.916745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.916844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.916869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.916994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.917020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.917126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.917150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 
00:37:19.646 [2024-07-21 08:33:28.917284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.917310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.917400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.917426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.917550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.917575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.917741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.917767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.917893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.917917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 
00:37:19.646 [2024-07-21 08:33:28.918096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.918122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.918227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.918253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.918382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.918408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.918546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.918574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.918743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.918782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 
00:37:19.646 [2024-07-21 08:33:28.918937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.918976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.919120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.919147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.919304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.919331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.919466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.919498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.919596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.919632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 
00:37:19.646 [2024-07-21 08:33:28.919739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.919767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.919874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.919902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.920063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.920090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.920229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.920256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.920391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.920420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 
00:37:19.646 [2024-07-21 08:33:28.920569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.920599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.920750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.920776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.920907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.920932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.921055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.921080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.921209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.921235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 
00:37:19.646 [2024-07-21 08:33:28.921361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.921388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.921555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.646 [2024-07-21 08:33:28.921581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.646 qpair failed and we were unable to recover it. 00:37:19.646 [2024-07-21 08:33:28.921728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.921753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 00:37:19.647 [2024-07-21 08:33:28.921914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.921939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 00:37:19.647 [2024-07-21 08:33:28.922065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.922090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 
00:37:19.647 [2024-07-21 08:33:28.922221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.922245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 00:37:19.647 [2024-07-21 08:33:28.922376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.922401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 00:37:19.647 [2024-07-21 08:33:28.922517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.922545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 00:37:19.647 [2024-07-21 08:33:28.922667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.922692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 00:37:19.647 [2024-07-21 08:33:28.922822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.922848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 
00:37:19.647 [2024-07-21 08:33:28.922975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.923000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 00:37:19.647 [2024-07-21 08:33:28.923126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.923151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 00:37:19.647 [2024-07-21 08:33:28.923326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.923351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 00:37:19.647 [2024-07-21 08:33:28.923487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.923512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 00:37:19.647 [2024-07-21 08:33:28.923634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.647 [2024-07-21 08:33:28.923662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.647 qpair failed and we were unable to recover it. 
00:37:19.647 [2024-07-21 08:33:28.923815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.923844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.924007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.924032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.924132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.924157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.924286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.924311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.924440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.924466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.924594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.924623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.924721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.924745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.924871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.924896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.925022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.925048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.925174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.925199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.925365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.925390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.925509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.925552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.925691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.925719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.925847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.925874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.925984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.926009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.926114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.926140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.926263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.926289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.926413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.926439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.926537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.926564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.926732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.926760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.926898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.926925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.927063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.927089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.927214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.927240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.927367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.927394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.927529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.927568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.927717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.927745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.927843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.927869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.647 [2024-07-21 08:33:28.927964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.647 [2024-07-21 08:33:28.927989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.647 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.928123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.928149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.928244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.928271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.928436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.928475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.928649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.928677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.928816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.928842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.928972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.928997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.929126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.929152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.929280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.929304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.929431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.929456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.929587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.929620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.929758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.929785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.929914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.929939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.930069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.930094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.930227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.930254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.930391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.930420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.930577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.930605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.930797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.930824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.930919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.930945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.931050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.931077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.931178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.931204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.931326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.931350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.931481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.931506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.931631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.931664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.931783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.931808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.931942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.931967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.932123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.932148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.932307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.932335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.932439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.932465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.648 qpair failed and we were unable to recover it.
00:37:19.648 [2024-07-21 08:33:28.932587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.648 [2024-07-21 08:33:28.932618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.932730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.932755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.932868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.932893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.933021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.933046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.933172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.933198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.933326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.933352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.933526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.933555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.933711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.933740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.933870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.933895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.934047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.934072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.934202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.934227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.934353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.934379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.934543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.934572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.934681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.934709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.934847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.934873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.935024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.935050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.935153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.935179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.935289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.935316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.935473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.935501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.935609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.935663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.935794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.935819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.935916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.935941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.936064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.936089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.936217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.936242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.936353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.936378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.936509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.936534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.936697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.936723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.936840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.936865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.936988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.937013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.937143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.937169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.937324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.937349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.937479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.937505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.937641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.937667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.937819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.937845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.937942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.937969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.938104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.938130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.938230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.938257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.938382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.938407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.938541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.649 [2024-07-21 08:33:28.938570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.649 qpair failed and we were unable to recover it.
00:37:19.649 [2024-07-21 08:33:28.938706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.649 [2024-07-21 08:33:28.938733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.649 qpair failed and we were unable to recover it. 00:37:19.649 [2024-07-21 08:33:28.938871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.649 [2024-07-21 08:33:28.938897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.649 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.938995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.939020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.939123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.939148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.939275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.939300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 
00:37:19.650 [2024-07-21 08:33:28.939400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.939426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.939578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.939627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.939766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.939793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.939896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.939922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.940051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.940077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 
00:37:19.650 [2024-07-21 08:33:28.940231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.940256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.940386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.940412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.940526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.940554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.940721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.940747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.940881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.940906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 
00:37:19.650 [2024-07-21 08:33:28.941115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.941140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.941272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.941299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.941485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.941528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.941712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.941738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.941941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.941968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 
00:37:19.650 [2024-07-21 08:33:28.942125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.942150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.942251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.942276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.942402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.942427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.942531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.942556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.942681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.942707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 
00:37:19.650 [2024-07-21 08:33:28.942858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.942883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.942975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.943005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.943158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.943183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.943310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.943336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.943543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.943571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 
00:37:19.650 [2024-07-21 08:33:28.943708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.943747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.943886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.943913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.944044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.944070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.944223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.944249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.944394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.944434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 
00:37:19.650 [2024-07-21 08:33:28.944600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.944638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.944772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.944799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.944900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.944926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.945055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.945081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.650 qpair failed and we were unable to recover it. 00:37:19.650 [2024-07-21 08:33:28.945210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.650 [2024-07-21 08:33:28.945236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 
00:37:19.651 [2024-07-21 08:33:28.945372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.945399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.945528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.945556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.945707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.945735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.945842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.945868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.946020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.946045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 
00:37:19.651 [2024-07-21 08:33:28.946197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.946222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.946358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.946385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.946517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.946543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.946707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.946734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.946829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.946854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 
00:37:19.651 [2024-07-21 08:33:28.946982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.947009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.947107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.947133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.947263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.947288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.947408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.947447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.947586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.947623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 
00:37:19.651 [2024-07-21 08:33:28.947761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.947788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.947952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.947978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.948104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.948149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.948355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.948381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.948508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.948534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 
00:37:19.651 [2024-07-21 08:33:28.948638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.948667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.948805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.948832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.949005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.949034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.949209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.949237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.949387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.949416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 
00:37:19.651 [2024-07-21 08:33:28.949561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.949586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.949719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.949746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.949885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.949911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.950130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.950186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.950297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.950326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 
00:37:19.651 [2024-07-21 08:33:28.950464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.950492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.950666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.950693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.950826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.950852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.950991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.951019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.951134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.951177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 
00:37:19.651 [2024-07-21 08:33:28.951287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.951317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.951431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.951461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.951578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.951604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.951747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.951774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 00:37:19.651 [2024-07-21 08:33:28.951898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.651 [2024-07-21 08:33:28.951927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.651 qpair failed and we were unable to recover it. 
00:37:19.652 [2024-07-21 08:33:28.952101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.952130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.952229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.952258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.952428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.952456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.952600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.952636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.952736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.952762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 
00:37:19.652 [2024-07-21 08:33:28.952852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.952878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.953066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.953092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.953270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.953299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.953431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.953459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.953590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.953621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 
00:37:19.652 [2024-07-21 08:33:28.953758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.953784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.953905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.953930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.954074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.954100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.954189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.954218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.954311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.954337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 
00:37:19.652 [2024-07-21 08:33:28.954467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.954495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.954638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.954680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.954816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.954842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.954966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.954992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.955121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.955163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 
00:37:19.652 [2024-07-21 08:33:28.955300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.955330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.955472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.955500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.955689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.955715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.955863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.955906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.956048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.956077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 
00:37:19.652 [2024-07-21 08:33:28.956245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.956273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.956407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.956436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.956576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.956604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.956755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.956781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.956942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.956968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 
00:37:19.652 [2024-07-21 08:33:28.957097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.957122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.957293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.957321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.957459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.957488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.957598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.957632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 00:37:19.652 [2024-07-21 08:33:28.957773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.957812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.652 qpair failed and we were unable to recover it. 
00:37:19.652 [2024-07-21 08:33:28.957981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.652 [2024-07-21 08:33:28.958026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.958172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.958217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.958363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.958407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.958537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.958563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.958723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.958766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 
00:37:19.653 [2024-07-21 08:33:28.958893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.958923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.959036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.959064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.959228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.959257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.959391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.959420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.959583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.959620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 
00:37:19.653 [2024-07-21 08:33:28.959773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.959801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.959968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.960028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.960292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.960345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.960465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.960493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.960600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.960633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 
00:37:19.653 [2024-07-21 08:33:28.960764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.960790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.960938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.960983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.961094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.961123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.961269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.961300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.961402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.961429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 
00:37:19.653 [2024-07-21 08:33:28.961552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.961578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.961738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.961768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.961950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.961978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.962093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.962123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.962292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.962321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 
00:37:19.653 [2024-07-21 08:33:28.962477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.962502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.962634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.962661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.962795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.962821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.963051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.963112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.963285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.963328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 
00:37:19.653 [2024-07-21 08:33:28.963457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.963483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.963620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.963648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.963749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.963776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.963884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.963911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.964093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.964138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 
00:37:19.653 [2024-07-21 08:33:28.964314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.964355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.964482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.964508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.964601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.964657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.964824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.964867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.653 qpair failed and we were unable to recover it. 00:37:19.653 [2024-07-21 08:33:28.965032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.653 [2024-07-21 08:33:28.965059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 
00:37:19.654 [2024-07-21 08:33:28.965181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.965211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.965323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.965353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.965493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.965523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.965640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.965685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.965825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.965855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 
00:37:19.654 [2024-07-21 08:33:28.965974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.966009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.966209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.966253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.966379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.966405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.966536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.966563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.966717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.966746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 
00:37:19.654 [2024-07-21 08:33:28.966905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.966949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.967162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.967204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.967331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.967356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.967485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.967512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.967646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.967672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 
00:37:19.654 [2024-07-21 08:33:28.967777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.967803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.967929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.967955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.968050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.968076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.968203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.968229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.968390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.968416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 
00:37:19.654 [2024-07-21 08:33:28.968516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.968542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.968663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.968691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.968829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.968855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.968967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.969011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.969143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.969170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 
00:37:19.654 [2024-07-21 08:33:28.969302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.969328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.969426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.969453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.969576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.969603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.969765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.969809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.969995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.970040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 
00:37:19.654 [2024-07-21 08:33:28.970170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.970196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.970305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.970344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.970488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.970514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.970636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.970681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 00:37:19.654 [2024-07-21 08:33:28.970824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.654 [2024-07-21 08:33:28.970852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.654 qpair failed and we were unable to recover it. 
00:37:19.654 [2024-07-21 08:33:28.971030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.654 [2024-07-21 08:33:28.971059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.654 qpair failed and we were unable to recover it.
00:37:19.654 [2024-07-21 08:33:28.971221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.654 [2024-07-21 08:33:28.971249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.654 qpair failed and we were unable to recover it.
00:37:19.654 [2024-07-21 08:33:28.971394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.654 [2024-07-21 08:33:28.971419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.654 qpair failed and we were unable to recover it.
00:37:19.654 [2024-07-21 08:33:28.971518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.654 [2024-07-21 08:33:28.971542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.654 qpair failed and we were unable to recover it.
00:37:19.654 [2024-07-21 08:33:28.971703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.654 [2024-07-21 08:33:28.971728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.654 qpair failed and we were unable to recover it.
00:37:19.654 [2024-07-21 08:33:28.971842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.971869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.972009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.972036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.972176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.972204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.972317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.972342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.972491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.972532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.972662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.972709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.972830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.972859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.972994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.973037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.973202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.973230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.973343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.973386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.973541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.973566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.973684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.973712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.973839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.973865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.973994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.974035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.974170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.974198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.974317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.974361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.974511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.974538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.974676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.974704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.974807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.974833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.975000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.975040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.975232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.975263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.975413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.975443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.975610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.975642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.975824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.975854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.976003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.976032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.976169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.976197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.976340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.976365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.976460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.976485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.976599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.976661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.976842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.976870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.977002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.977029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.977179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.977205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.977328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.977360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.977490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.977516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.977667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.977698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.977840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.977869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.977990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.978015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.978145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.978171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.978302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.978327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.978455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.978480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.978618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.655 [2024-07-21 08:33:28.978670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.655 qpair failed and we were unable to recover it.
00:37:19.655 [2024-07-21 08:33:28.978844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.978873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.979048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.979077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.979252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.979278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.979408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.979434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.979590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.979623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.979746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.979775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.979947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.979972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.980071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.980097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.980226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.980252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.980375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.980401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.980509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.980537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.980701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.980745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.980888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.980917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.981066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.981092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.981216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.981241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.981346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.981372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.981503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.981527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.981709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.981738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.981880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.981931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.982056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.982083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.982217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.982243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.982349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.982375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.982527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.982553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.982689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.982716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.982846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.982873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.982972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.982999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.983122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.983149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.983302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.983328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.983446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.983477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.983691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.983717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.983855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.983882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.983990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.984015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.984120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.984144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.984266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.984291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.984421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.984446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.984570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.984605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.984773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.984800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.984933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.984957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.985053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.985077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.985232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.985258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.985353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.656 [2024-07-21 08:33:28.985378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.656 qpair failed and we were unable to recover it.
00:37:19.656 [2024-07-21 08:33:28.985536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.657 [2024-07-21 08:33:28.985580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.657 qpair failed and we were unable to recover it.
00:37:19.657 [2024-07-21 08:33:28.985744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.657 [2024-07-21 08:33:28.985772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.657 qpair failed and we were unable to recover it.
00:37:19.657 [2024-07-21 08:33:28.985942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.657 [2024-07-21 08:33:28.985969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.657 qpair failed and we were unable to recover it.
00:37:19.657 [2024-07-21 08:33:28.986121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.657 [2024-07-21 08:33:28.986147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.657 qpair failed and we were unable to recover it.
00:37:19.657 [2024-07-21 08:33:28.986248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.657 [2024-07-21 08:33:28.986282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.657 qpair failed and we were unable to recover it.
00:37:19.657 [2024-07-21 08:33:28.986385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.657 [2024-07-21 08:33:28.986412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.657 qpair failed and we were unable to recover it.
00:37:19.657 [2024-07-21 08:33:28.986514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.657 [2024-07-21 08:33:28.986542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.657 qpair failed and we were unable to recover it.
00:37:19.657 [2024-07-21 08:33:28.986667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.657 [2024-07-21 08:33:28.986694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.657 qpair failed and we were unable to recover it.
00:37:19.657 [2024-07-21 08:33:28.986823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.986849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.986952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.986978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.987129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.987155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.987287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.987314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.987449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.987488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 
00:37:19.657 [2024-07-21 08:33:28.987631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.987661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.987773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.987801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.987913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.987940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.988094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.988119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.988253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.988280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 
00:37:19.657 [2024-07-21 08:33:28.988463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.988493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.988638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.988687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.988847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.988874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.989006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.989031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.989223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.989249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 
00:37:19.657 [2024-07-21 08:33:28.989381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.989406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.989546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.989571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.989674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.989702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.989811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.989837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.989939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.989966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 
00:37:19.657 [2024-07-21 08:33:28.990090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.990116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.990271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.990297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.990458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.990487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.990624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.990664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 00:37:19.657 [2024-07-21 08:33:28.990807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.657 [2024-07-21 08:33:28.990834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.657 qpair failed and we were unable to recover it. 
00:37:19.657 [2024-07-21 08:33:28.990984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.991010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.991166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.991191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.991318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.991342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.991485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.991524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.991660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.991687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 
00:37:19.658 [2024-07-21 08:33:28.991848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.991874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.992001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.992028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.992133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.992159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.992292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.992319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.992455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.992481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 
00:37:19.658 [2024-07-21 08:33:28.992669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.992696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.992827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.992854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.993017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.993043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.993170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.993195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.993294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.993320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 
00:37:19.658 [2024-07-21 08:33:28.993436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.993462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.993563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.993592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.993713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.993740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.993865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.993891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.993996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.994023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 
00:37:19.658 [2024-07-21 08:33:28.994158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.994185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.994309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.994335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.994474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.994503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.994651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.994678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.994802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.994828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 
00:37:19.658 [2024-07-21 08:33:28.994972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.994999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.995140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.995167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.995289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.995315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.995448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.995476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.995634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.995661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 
00:37:19.658 [2024-07-21 08:33:28.995790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.995816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.995967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.995993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.996101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.996129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.996263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.996289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.996416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.996445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 
00:37:19.658 [2024-07-21 08:33:28.996659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.996685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.996776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.996802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.996928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.996954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.997085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.997116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.997243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.997269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 
00:37:19.658 [2024-07-21 08:33:28.997422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.658 [2024-07-21 08:33:28.997448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.658 qpair failed and we were unable to recover it. 00:37:19.658 [2024-07-21 08:33:28.997582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.997608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.997711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.997737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.997838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.997864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.997969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.997997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 
00:37:19.659 [2024-07-21 08:33:28.998094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.998120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.998251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.998279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.998441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.998468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.998610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.998656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.998768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.998800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 
00:37:19.659 [2024-07-21 08:33:28.998958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.998984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.999208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.999236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.999349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.999376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.999508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.999534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.999687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.999714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 
00:37:19.659 [2024-07-21 08:33:28.999817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.999843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:28.999973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:28.999999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:29.000123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:29.000149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:29.000244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:29.000271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 00:37:19.659 [2024-07-21 08:33:29.000397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.659 [2024-07-21 08:33:29.000424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.659 qpair failed and we were unable to recover it. 
00:37:19.659 [2024-07-21 08:33:29.000544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.000570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.000718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.000757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.000888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.000915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.001073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.001098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.001214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.001239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.001356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.001395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.001547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.001586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.001746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.001777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.001948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.002005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.002115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.002145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.002316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.002346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.002519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.002547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.002728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.002776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.002964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.002994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.003151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.003181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.003366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.003393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.003527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.003554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.003773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.003805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.003953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.003986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.004128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.004156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.004323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.004351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.659 [2024-07-21 08:33:29.004474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.659 [2024-07-21 08:33:29.004517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.659 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.004709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.004737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.004944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.005003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.005186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.005240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.005350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.005378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.005508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.005535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.005641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.005668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.005855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.005898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.006020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.006065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.006208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.006237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.006363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.006389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.006490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.006517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.006688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.006727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.006873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.006901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.007025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.007050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.007173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.007199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.007356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.007382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.007500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.007538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.007697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.007724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.007851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.007896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.008021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.008047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.008262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.008292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.008435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.008462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.008562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.008590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.008719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.008768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.008888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.008932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.009124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.009154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.009268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.009309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.009432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.009456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.009572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.009598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.009755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.009783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.009930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.009991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.010142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.010198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.010343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.010387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.010517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.010543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.010715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.010744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.010876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.010904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.011099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.011143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.011266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.011308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.011466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.011492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.011624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.011651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.660 [2024-07-21 08:33:29.011754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.660 [2024-07-21 08:33:29.011780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.660 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.011938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.011964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.012068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.012094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.012202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.012229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.012361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.012385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.012493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.012518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.012678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.012704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.012834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.012860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.012995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.013020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.013168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.013199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.013374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.013408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.013535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.013561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.013710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.013759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.013933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.013976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.014156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.014200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.014345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.014373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.014521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.014547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.014672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.014727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.014880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.014923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.015080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.015124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.015283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.015309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.015458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.015484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.015640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.015666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.015877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.015903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.016035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.016061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.016265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.016291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.016396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.016423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.016574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.016600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.016705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.016733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.016919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.016961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.017168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.017211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.017414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.017440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.017598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.017629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.017795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.017841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.017955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.017984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.661 qpair failed and we were unable to recover it.
00:37:19.661 [2024-07-21 08:33:29.018146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.661 [2024-07-21 08:33:29.018189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.018362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.018406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.018541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.018568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.018760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.018804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.018985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.019031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.019173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.019217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.019377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.019403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.019502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.019529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.019677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.019724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.019876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.019920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.020092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.020135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.020263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.020289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.020448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.020473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.020599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.020632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.020786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.020812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.020932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.020979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.021134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.021160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.021287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.662 [2024-07-21 08:33:29.021312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.662 qpair failed and we were unable to recover it.
00:37:19.662 [2024-07-21 08:33:29.021463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.021488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.021621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.021647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.021780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.021806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.021913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.021939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.022067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.022092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 
00:37:19.662 [2024-07-21 08:33:29.022249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.022275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.022407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.022433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.022531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.022557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.022705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.022750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.022872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.022916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 
00:37:19.662 [2024-07-21 08:33:29.023067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.023112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.023277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.023303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.023436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.023462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.023601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.023659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.023814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.023860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 
00:37:19.662 [2024-07-21 08:33:29.024004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.024051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.024258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.024284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.024438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.024463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.024627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.024654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.024830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.024873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 
00:37:19.662 [2024-07-21 08:33:29.025017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.025061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.025193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.025220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.025346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.662 [2024-07-21 08:33:29.025373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.662 qpair failed and we were unable to recover it. 00:37:19.662 [2024-07-21 08:33:29.025502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.025528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.025663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.025689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 
00:37:19.663 [2024-07-21 08:33:29.025817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.025843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.026002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.026028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.026154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.026181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.026310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.026339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.026438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.026464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 
00:37:19.663 [2024-07-21 08:33:29.026608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.026640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.026767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.026793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.026943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.026969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.027095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.027120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.027259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.027305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 
00:37:19.663 [2024-07-21 08:33:29.027462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.027488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.027625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.027651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.027795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.027843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.027975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.028001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.028133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.028159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 
00:37:19.663 [2024-07-21 08:33:29.028291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.028318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.028447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.028474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.028630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.028657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.028766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.028795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.028982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.029026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 
00:37:19.663 [2024-07-21 08:33:29.029156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.029182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.029308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.029336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.029469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.029494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.029667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.029695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.029840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.029869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 
00:37:19.663 [2024-07-21 08:33:29.030025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.030054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.030198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.030226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.030388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.030415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.030579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.030608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.030783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.030811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 
00:37:19.663 [2024-07-21 08:33:29.031111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.031168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.031332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.031360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.031497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.031524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.031658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.031687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.031867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.031912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 
00:37:19.663 [2024-07-21 08:33:29.032033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.032063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.032224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.032268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.032388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.032414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.032554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.663 [2024-07-21 08:33:29.032580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.663 qpair failed and we were unable to recover it. 00:37:19.663 [2024-07-21 08:33:29.032766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.032813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 
00:37:19.664 [2024-07-21 08:33:29.032960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.033004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.033177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.033222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.033342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.033367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.033474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.033501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.033635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.033665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 
00:37:19.664 [2024-07-21 08:33:29.033812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.033843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.033949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.033977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.034142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.034170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.034269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.034298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.034477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.034502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 
00:37:19.664 [2024-07-21 08:33:29.034607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.034638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.034797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.034841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.034990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.035034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.035217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.035262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 00:37:19.664 [2024-07-21 08:33:29.035389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.664 [2024-07-21 08:33:29.035415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.664 qpair failed and we were unable to recover it. 
00:37:19.667 [2024-07-21 08:33:29.054562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.054588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.054775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.054819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.054972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.055018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.055191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.055252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.055402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.055428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 
00:37:19.667 [2024-07-21 08:33:29.055561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.055587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.055716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.055747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.055890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.055919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.056030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.056058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.056197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.056225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 
00:37:19.667 [2024-07-21 08:33:29.056368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.056395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.056511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.056541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.056692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.056718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.056871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.056896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.057006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.057032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 
00:37:19.667 [2024-07-21 08:33:29.057176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.057206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.057377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.057421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.057528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.057556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.057764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.057791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.057912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.057941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 
00:37:19.667 [2024-07-21 08:33:29.058107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.058150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.058326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.058370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.058498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.058525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.058680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.058711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.058898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.058928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 
00:37:19.667 [2024-07-21 08:33:29.059035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.059065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.059211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.059239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.059391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.059417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.059543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.059569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.059710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.059735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 
00:37:19.667 [2024-07-21 08:33:29.059843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.059884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.060062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.667 [2024-07-21 08:33:29.060090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.667 qpair failed and we were unable to recover it. 00:37:19.667 [2024-07-21 08:33:29.060213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.060241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.060409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.060435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.060560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.060585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 
00:37:19.668 [2024-07-21 08:33:29.060719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.060744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.060866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.060894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.061010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.061039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.061187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.061213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.061361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.061389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 
00:37:19.668 [2024-07-21 08:33:29.061533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.061558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.061696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.061723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.061827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.061851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.061948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.061973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.062131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.062158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 
00:37:19.668 [2024-07-21 08:33:29.062281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.062323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.062461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.062489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.062653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.062679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.062830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.062854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.062988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.063016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 
00:37:19.668 [2024-07-21 08:33:29.063123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.063155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.063394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.063422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.063561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.063589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.063774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.063800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.064003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.064055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 
00:37:19.668 [2024-07-21 08:33:29.064213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.064242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.064382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.064411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.064530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.064554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.064684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.064710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.064816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.064842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 
00:37:19.668 [2024-07-21 08:33:29.064941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.064967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.065118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.065143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.065319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.065347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.065580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.065608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.065746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.065772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 
00:37:19.668 [2024-07-21 08:33:29.065874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.065900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.066076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.668 [2024-07-21 08:33:29.066105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.668 qpair failed and we were unable to recover it. 00:37:19.668 [2024-07-21 08:33:29.066234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.066260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.066411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.066440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.066580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.066605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 
00:37:19.669 [2024-07-21 08:33:29.066744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.066770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.066916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.066945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.067087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.067116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.067221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.067249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.067382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.067410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 
00:37:19.669 [2024-07-21 08:33:29.067541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.067570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.067757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.067783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.067906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.067938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.068083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.068108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.068257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.068299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 
00:37:19.669 [2024-07-21 08:33:29.068432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.068461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.068574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.068599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.068734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.068759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.068897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.068923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.069072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.069101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 
00:37:19.669 [2024-07-21 08:33:29.069251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.069280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.069425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.069454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.069576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.069602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.069737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.069762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.069941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.069969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 
00:37:19.669 [2024-07-21 08:33:29.070122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.070153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.070326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.070355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.070463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.070493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.070665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.070692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.070785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.070810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 
00:37:19.669 [2024-07-21 08:33:29.070913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.070938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.071059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.071084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.071261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.071289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.071426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.071454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.071625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.071654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 
00:37:19.669 [2024-07-21 08:33:29.071798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.071823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.071936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.071964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.072087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.669 [2024-07-21 08:33:29.072113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.669 qpair failed and we were unable to recover it. 00:37:19.669 [2024-07-21 08:33:29.072226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.072252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.072381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.072407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 
00:37:19.670 [2024-07-21 08:33:29.072534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.072560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.072678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.072704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.072856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.072882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.072975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.073002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.073129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.073154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 
00:37:19.670 [2024-07-21 08:33:29.073272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.073314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.073470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.073496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.073623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.073649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.073803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.073828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.073954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.073979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 
00:37:19.670 [2024-07-21 08:33:29.074104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.074146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.074316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.074344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.074497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.074523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.074651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.074698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.074813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.074841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 
00:37:19.670 [2024-07-21 08:33:29.074992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.075019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.075126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.075153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.075282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.075307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.075430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.075456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.075604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.075639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 
00:37:19.670 [2024-07-21 08:33:29.075803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.075832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.075962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.075987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.076139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.076164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.076333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.076358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.670 [2024-07-21 08:33:29.076508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.076537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 
00:37:19.670 [2024-07-21 08:33:29.076709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.670 [2024-07-21 08:33:29.076735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.670 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.076871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.076912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.077092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.077118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.077238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.077263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.077441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.077469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 
00:37:19.671 [2024-07-21 08:33:29.077623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.077650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.077778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.077820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.077931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.077959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.078112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.078137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.078231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.078256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 
00:37:19.671 [2024-07-21 08:33:29.078357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.078383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.078511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.078536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.078666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.078692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.078845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.078873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.079016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.079041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 
00:37:19.671 [2024-07-21 08:33:29.079143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.079173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.079316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.079344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.079496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.079521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.079651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.079677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.079855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.079884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 
00:37:19.671 [2024-07-21 08:33:29.080036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.080061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.080183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.080208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.080335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.080363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.080475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.080501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.080603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.080633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 
00:37:19.671 [2024-07-21 08:33:29.080776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.080804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.080958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.080984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.081110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.081151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.081290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.081318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.081463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.081489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 
00:37:19.671 [2024-07-21 08:33:29.081584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.081610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.081776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.081801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.081932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.081958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.082088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.082116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.082292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.082317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 
00:37:19.671 [2024-07-21 08:33:29.082467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.082495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.671 [2024-07-21 08:33:29.082677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.671 [2024-07-21 08:33:29.082704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.671 qpair failed and we were unable to recover it. 00:37:19.672 [2024-07-21 08:33:29.082805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.672 [2024-07-21 08:33:29.082831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.672 qpair failed and we were unable to recover it. 00:37:19.672 [2024-07-21 08:33:29.082958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.672 [2024-07-21 08:33:29.082984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.672 qpair failed and we were unable to recover it. 00:37:19.672 [2024-07-21 08:33:29.083249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.672 [2024-07-21 08:33:29.083277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.672 qpair failed and we were unable to recover it. 
00:37:19.672 [2024-07-21 08:33:29.083419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.672 [2024-07-21 08:33:29.083448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.672 qpair failed and we were unable to recover it. 
[the same three-line sequence — posix.c:1038:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously in the log from 08:33:29.083 through 08:33:29.102; repeated occurrences elided]
00:37:19.690 [2024-07-21 08:33:29.102509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.102535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.102665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.102691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.102846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.102875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.103024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.103049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.103160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.103185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 
00:37:19.690 [2024-07-21 08:33:29.103317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.103342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.103494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.103520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.103663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.103692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.103864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.103893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.104033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.104058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 
00:37:19.690 [2024-07-21 08:33:29.104215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.104258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.104409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.104438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.104587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.104618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.104744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.104785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.104893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.104921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 
00:37:19.690 [2024-07-21 08:33:29.105068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.105095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.105194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.105219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.105352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.105377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.105490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.105518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.105631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.105674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 
00:37:19.690 [2024-07-21 08:33:29.105783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.105808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.105964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.105989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.106081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.106125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.106261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.106290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.106418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.106444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 
00:37:19.690 [2024-07-21 08:33:29.106572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.106598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.106779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.106809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.106921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.106947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.107053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.107078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.107176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.107201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 
00:37:19.690 [2024-07-21 08:33:29.107326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.107352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.107474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.107515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.690 [2024-07-21 08:33:29.107656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.690 [2024-07-21 08:33:29.107685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.690 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.107826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.107852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.107980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.108005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 
00:37:19.691 [2024-07-21 08:33:29.108171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.108197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.108326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.108352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.108484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.108529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.108675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.108704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.108854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.108881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 
00:37:19.691 [2024-07-21 08:33:29.109054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.109083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.109226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.109255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.109376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.109402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.109558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.109583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.109744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.109772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 
00:37:19.691 [2024-07-21 08:33:29.109896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.109922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.110044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.110069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.110231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.110259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.110438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.110463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.110561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.110603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 
00:37:19.691 [2024-07-21 08:33:29.110727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.110756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.110882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.110908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.111059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.111084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.111262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.111290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.111413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.111439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 
00:37:19.691 [2024-07-21 08:33:29.111566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.111593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.111804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.111830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.111959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.111986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.112118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.112162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.112313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.112338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 
00:37:19.691 [2024-07-21 08:33:29.112484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.112512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.112670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.112696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.112799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.112825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.112989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.113015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.113151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.113179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 
00:37:19.691 [2024-07-21 08:33:29.113331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.113360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.113537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.691 [2024-07-21 08:33:29.113562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.691 qpair failed and we were unable to recover it. 00:37:19.691 [2024-07-21 08:33:29.113660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.113685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.113816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.113841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.113931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.113956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 
00:37:19.692 [2024-07-21 08:33:29.114077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.114102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.114284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.114313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.114508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.114533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.114707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.114736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.114904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.114934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 
00:37:19.692 [2024-07-21 08:33:29.115105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.115131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.115258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.115301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.115440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.115468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.115625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.115652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.115824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.115852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 
00:37:19.692 [2024-07-21 08:33:29.115992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.116021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.116165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.116192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.116317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.116343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.116500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.116528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 00:37:19.692 [2024-07-21 08:33:29.116635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.692 [2024-07-21 08:33:29.116662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.692 qpair failed and we were unable to recover it. 
00:37:19.692 [2024-07-21 08:33:29.116749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.116774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.116917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.116945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.117097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.117122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.117213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.117238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.117387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.117415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.117562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.117588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.117751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.117776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.117942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.118001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.118171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.118196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.118346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.118387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.118526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.118554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.118741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.118768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.118886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.118928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.692 qpair failed and we were unable to recover it.
00:37:19.692 [2024-07-21 08:33:29.119042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.692 [2024-07-21 08:33:29.119070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.119217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.119242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.119393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.119436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.119591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.119622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.119753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.119778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.119878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.119904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.120031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.120056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.120154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.120183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.120280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.120307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.120484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.120513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.120633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.120660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.120768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.120794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.120924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.120949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.121066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.121092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.121191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.121216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.121370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.121398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.121510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.121535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.121689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.121716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.121913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.121938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.122031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.122056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.122208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.122233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.122378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.122420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.122547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.122573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.122716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.122761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.122902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.122930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.123045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.123072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.123226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.123251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.123406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.123436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.123575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.123601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.123733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.123758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.123904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.123933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.124107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.124132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.124256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.124298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.124396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.124424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.124570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.124595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.124734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.124759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.124915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.124943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.125116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.125141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.125230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.125256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.125427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.125452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.125567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.125608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.693 [2024-07-21 08:33:29.125730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.693 [2024-07-21 08:33:29.125755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.693 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.125902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.125941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.126125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.126151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.126253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.126278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.126383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.126408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.126504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.126529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.126626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.126654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.126750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.126779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.126932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.126958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.127051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.127078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.127186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.127220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.127352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.127379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.127509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.127537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.127663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.127690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.127792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.127819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.127949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.127975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.128127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.128154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.128282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.128309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.128437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.128464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.128595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.128630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.128787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.128814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.128950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.128978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.129111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.129137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.129239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.129266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.129362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.129388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.129486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.129512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.129643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.129670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.129805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.129831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.129951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.129977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.130141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.130167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.130293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.130320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.130473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.130499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.130658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.130685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.130810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.130836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.130966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.130993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.131152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.694 [2024-07-21 08:33:29.131179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.694 qpair failed and we were unable to recover it.
00:37:19.694 [2024-07-21 08:33:29.131334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-07-21 08:33:29.131361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.695 qpair failed and we were unable to recover it.
00:37:19.695 [2024-07-21 08:33:29.131457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-07-21 08:33:29.131483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.695 qpair failed and we were unable to recover it.
00:37:19.695 [2024-07-21 08:33:29.131611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-07-21 08:33:29.131644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.695 qpair failed and we were unable to recover it.
00:37:19.695 [2024-07-21 08:33:29.131793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-07-21 08:33:29.131819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.695 qpair failed and we were unable to recover it.
00:37:19.695 [2024-07-21 08:33:29.131946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.695 [2024-07-21 08:33:29.131973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:19.695 qpair failed and we were unable to recover it.
00:37:19.695 [2024-07-21 08:33:29.132081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.132107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.132258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.132284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.132435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.132461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.132593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.132625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.132758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.132785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 
00:37:19.695 [2024-07-21 08:33:29.132936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.132962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.133065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.133099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.133230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.133256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.133382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.133408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.133507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.133534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 
00:37:19.695 [2024-07-21 08:33:29.133656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.133683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.133811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.133838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.133938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.133963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.134089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.134115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.134217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.134244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 
00:37:19.695 [2024-07-21 08:33:29.134345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.134371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.134525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.134552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.134690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.134731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.134846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.134877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.134978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.135004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 
00:37:19.695 [2024-07-21 08:33:29.135162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.135189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.135291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.135316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.135425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.135451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.135575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.135600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.135767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.135793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 
00:37:19.695 [2024-07-21 08:33:29.135888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.135913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.136021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.136047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.136202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.136227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.136381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.136406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.695 qpair failed and we were unable to recover it. 00:37:19.695 [2024-07-21 08:33:29.136534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.695 [2024-07-21 08:33:29.136559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 
00:37:19.696 [2024-07-21 08:33:29.136725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.136751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.136880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.136906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.137010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.137035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.137157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.137188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.137292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.137319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 
00:37:19.696 [2024-07-21 08:33:29.137507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.137536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.137660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.137687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.137792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.137818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.137923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.137948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.138072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.138098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 
00:37:19.696 [2024-07-21 08:33:29.138250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.138276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.138367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.138392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.138489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.138515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.138639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.138665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.138779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.138804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 
00:37:19.696 [2024-07-21 08:33:29.138915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.138941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.139057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.139082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.139214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.139239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.139371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.139397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.139541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.139570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 
00:37:19.696 [2024-07-21 08:33:29.139694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.139720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.139848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.139873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.139965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.139990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.140093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.140119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.140245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.140270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 
00:37:19.696 [2024-07-21 08:33:29.140426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.140451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.140619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.140677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.140843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.140871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.140996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.141024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.141181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.141207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 
00:37:19.696 [2024-07-21 08:33:29.141306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.141338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.141477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.141503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.141607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.141639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.141797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.141823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.141922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.141949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 
00:37:19.696 [2024-07-21 08:33:29.142107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.142134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.142246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.142272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.142399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.142425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.142553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.142579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.696 [2024-07-21 08:33:29.142713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.142740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 
00:37:19.696 [2024-07-21 08:33:29.142867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.696 [2024-07-21 08:33:29.142893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.696 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.143015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.143041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.143147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.143175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.143326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.143352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.143495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.143524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 
00:37:19.697 [2024-07-21 08:33:29.143653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.143681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.143807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.143833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.143960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.143987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.144116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.144143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.144249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.144275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 
00:37:19.697 [2024-07-21 08:33:29.144431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.144458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.144611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.144643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.144736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.144762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.144932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.144961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.145190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.145219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 
00:37:19.697 [2024-07-21 08:33:29.145391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.145417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.145568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.145594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.145739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.145766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.145869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.145895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.146018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.146044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 
00:37:19.697 [2024-07-21 08:33:29.146174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.146200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.146328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.146354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.146476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.146503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.146611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.146650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.146756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.146782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 
00:37:19.697 [2024-07-21 08:33:29.146937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.146964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.147096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.147123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.147278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.147305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.147454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.147479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.147603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.147636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 
00:37:19.697 [2024-07-21 08:33:29.147787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.147818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.147969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.147995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.148095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.148122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.148273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.148299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.148393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.148420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 
00:37:19.697 [2024-07-21 08:33:29.148553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.148580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.148687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.148714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.148825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.148852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.148949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.148975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.149096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.149121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 
00:37:19.697 [2024-07-21 08:33:29.149277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.149318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.697 [2024-07-21 08:33:29.149452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.697 [2024-07-21 08:33:29.149479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.697 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.149640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.149666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.149765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.149791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.149900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.149925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 
00:37:19.698 [2024-07-21 08:33:29.150026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.150052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.150191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.150216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.150315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.150340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.150467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.150495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.150636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.150664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 
00:37:19.698 [2024-07-21 08:33:29.150807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.150834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.150937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.150963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.151093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.151119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.151219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.151245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.151400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.151426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 
00:37:19.698 [2024-07-21 08:33:29.151583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.151609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.151717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.151743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.151903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.151933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.152040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.152068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.152189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.152215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 
00:37:19.698 [2024-07-21 08:33:29.152317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.152344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.152495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.152525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.152673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.152700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.152827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.152853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.152976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.153002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 
00:37:19.698 [2024-07-21 08:33:29.153157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.153183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.153316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.153343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.153441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.153467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.153629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.153656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.153777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.153803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 
00:37:19.698 [2024-07-21 08:33:29.153957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.153983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.154112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.154139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.154305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.154344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.154476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.154504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.154627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.154655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 
00:37:19.698 [2024-07-21 08:33:29.154808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.154833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.154970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.154995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.155153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.155178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.155281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.155309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.155464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.155490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 
00:37:19.698 [2024-07-21 08:33:29.155597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.155630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.155752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.698 [2024-07-21 08:33:29.155778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.698 qpair failed and we were unable to recover it. 00:37:19.698 [2024-07-21 08:33:29.155908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.155935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.156087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.156113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.156250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.156278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 
00:37:19.699 [2024-07-21 08:33:29.156406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.156431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.156607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.156642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.156813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.156839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.156992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.157017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.157173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.157198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 
00:37:19.699 [2024-07-21 08:33:29.157298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.157326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.157423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.157450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.157586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.157617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.157718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.157744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.157848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.157873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 
00:37:19.699 [2024-07-21 08:33:29.157999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.158025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.158124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.158150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.158251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.158281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.158414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.158443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.158644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.158686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 
00:37:19.699 [2024-07-21 08:33:29.158814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.158841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.158970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.158995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.159129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.159155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.159285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.159312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.159470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.159495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 
00:37:19.699 [2024-07-21 08:33:29.159662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.159691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.159798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.159824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.159965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.159991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.160110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.160136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 00:37:19.699 [2024-07-21 08:33:29.160230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.699 [2024-07-21 08:33:29.160256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.699 qpair failed and we were unable to recover it. 
00:37:19.699 [2024-07-21 08:33:29.160383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.699 [2024-07-21 08:33:29.160409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:19.699 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error; qpair failed and we were unable to recover it) repeats continuously from 08:33:29.160541 through 08:33:29.179373, all targeting addr=10.0.0.2, port=4420, cycling over tqpair handles 0x7fd7e4000b90, 0x7fd7d4000b90, 0x7fd7dc000b90, and 0x64d560 ...]
00:37:19.702 [2024-07-21 08:33:29.179520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.179546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 00:37:19.702 [2024-07-21 08:33:29.179697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.179723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 00:37:19.702 [2024-07-21 08:33:29.179880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.179929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 00:37:19.702 [2024-07-21 08:33:29.180137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.180167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 00:37:19.702 [2024-07-21 08:33:29.180368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.180397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 
00:37:19.702 [2024-07-21 08:33:29.180537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.180566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 00:37:19.702 [2024-07-21 08:33:29.180764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.180803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 00:37:19.702 [2024-07-21 08:33:29.180943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.180971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 00:37:19.702 [2024-07-21 08:33:29.181070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.181112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 00:37:19.702 [2024-07-21 08:33:29.181250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.181279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 
00:37:19.702 [2024-07-21 08:33:29.181391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.181433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 00:37:19.702 [2024-07-21 08:33:29.181576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.181604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 00:37:19.702 [2024-07-21 08:33:29.181734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.702 [2024-07-21 08:33:29.181760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.702 qpair failed and we were unable to recover it. 00:37:19.702 [2024-07-21 08:33:29.181852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.181877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.182003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.182028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 
00:37:19.703 [2024-07-21 08:33:29.182187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.182215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.182358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.182387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.182529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.182557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.182677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.182703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.182825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.182853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 
00:37:19.703 [2024-07-21 08:33:29.183000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.183028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.183193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.183222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.183358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.183387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.183536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.183565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.183698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.183724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 
00:37:19.703 [2024-07-21 08:33:29.183847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.183875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.184015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.184043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.184206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.184235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.184443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.184501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.184639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.184668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 
00:37:19.703 [2024-07-21 08:33:29.184822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.184864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.185010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.185053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.185231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.185277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.185407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.185439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.185547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.185575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 
00:37:19.703 [2024-07-21 08:33:29.185704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.185733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.185905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.185934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.186155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.186184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.186326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.186356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.186524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.186552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 
00:37:19.703 [2024-07-21 08:33:29.186730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.186758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.186911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.186954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.187079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.187124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.187274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.187316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.187445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.187472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 
00:37:19.703 [2024-07-21 08:33:29.187627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.187669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.187783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.187812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.703 qpair failed and we were unable to recover it. 00:37:19.703 [2024-07-21 08:33:29.188015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.703 [2024-07-21 08:33:29.188059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.188206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.188250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.188373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.188399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 
00:37:19.704 [2024-07-21 08:33:29.188527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.188553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.188763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.188791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.188916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.188942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.189074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.189100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.189224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.189250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 
00:37:19.704 [2024-07-21 08:33:29.189380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.189407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.189540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.189566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.189667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.189694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.189838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.189866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.190002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.190031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 
00:37:19.704 [2024-07-21 08:33:29.190179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.190209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.190315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.190344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.190474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.190503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.190655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.190683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.190850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.190878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 
00:37:19.704 [2024-07-21 08:33:29.191016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.191044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.191183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.191212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.191339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.191368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.191475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.191504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.191685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.191712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 
00:37:19.704 [2024-07-21 08:33:29.191815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.191858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.191998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.192026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.192164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.192193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.192326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.192355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.192517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.192543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 
00:37:19.704 [2024-07-21 08:33:29.192695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.192722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.704 qpair failed and we were unable to recover it. 00:37:19.704 [2024-07-21 08:33:29.192843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.704 [2024-07-21 08:33:29.192869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.705 qpair failed and we were unable to recover it. 00:37:19.705 [2024-07-21 08:33:29.192999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.705 [2024-07-21 08:33:29.193042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.705 qpair failed and we were unable to recover it. 00:37:19.705 [2024-07-21 08:33:29.193149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.705 [2024-07-21 08:33:29.193179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.705 qpair failed and we were unable to recover it. 00:37:19.705 [2024-07-21 08:33:29.193341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.705 [2024-07-21 08:33:29.193370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.705 qpair failed and we were unable to recover it. 
00:37:19.706 [2024-07-21 08:33:29.203271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.706 [2024-07-21 08:33:29.203296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.706 qpair failed and we were unable to recover it.
00:37:19.706 [2024-07-21 08:33:29.203425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.706 [2024-07-21 08:33:29.203451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.706 qpair failed and we were unable to recover it.
00:37:19.706 [2024-07-21 08:33:29.203597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.706 [2024-07-21 08:33:29.203642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.706 qpair failed and we were unable to recover it.
00:37:19.706 [2024-07-21 08:33:29.203809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.706 [2024-07-21 08:33:29.203837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.706 qpair failed and we were unable to recover it.
00:37:19.706 [2024-07-21 08:33:29.203981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.706 [2024-07-21 08:33:29.204025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.706 qpair failed and we were unable to recover it.
00:37:19.706 [2024-07-21 08:33:29.204169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.706 [2024-07-21 08:33:29.204219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.706 qpair failed and we were unable to recover it.
00:37:19.706 [2024-07-21 08:33:29.204369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.706 [2024-07-21 08:33:29.204415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.706 qpair failed and we were unable to recover it.
00:37:19.706 [2024-07-21 08:33:29.204548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.706 [2024-07-21 08:33:29.204574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:19.706 qpair failed and we were unable to recover it.
00:37:19.706 [2024-07-21 08:33:29.204717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.706 [2024-07-21 08:33:29.204744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.706 qpair failed and we were unable to recover it.
00:37:19.706 [2024-07-21 08:33:29.204845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.706 [2024-07-21 08:33:29.204870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:19.706 qpair failed and we were unable to recover it.
00:37:19.707 [2024-07-21 08:33:29.211761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.707 [2024-07-21 08:33:29.211786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.707 qpair failed and we were unable to recover it. 00:37:19.707 [2024-07-21 08:33:29.211934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.707 [2024-07-21 08:33:29.211962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.707 qpair failed and we were unable to recover it. 00:37:19.707 [2024-07-21 08:33:29.212095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.707 [2024-07-21 08:33:29.212124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.707 qpair failed and we were unable to recover it. 00:37:19.707 [2024-07-21 08:33:29.212242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.707 [2024-07-21 08:33:29.212283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.707 qpair failed and we were unable to recover it. 00:37:19.707 [2024-07-21 08:33:29.212425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.707 [2024-07-21 08:33:29.212453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.707 qpair failed and we were unable to recover it. 
00:37:19.707 [2024-07-21 08:33:29.212577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.707 [2024-07-21 08:33:29.212603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.707 qpair failed and we were unable to recover it. 00:37:19.707 [2024-07-21 08:33:29.212763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.707 [2024-07-21 08:33:29.212801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.707 qpair failed and we were unable to recover it. 00:37:19.707 [2024-07-21 08:33:29.212935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.707 [2024-07-21 08:33:29.212966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.707 qpair failed and we were unable to recover it. 00:37:19.707 [2024-07-21 08:33:29.213105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.707 [2024-07-21 08:33:29.213135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.707 qpair failed and we were unable to recover it. 00:37:19.707 [2024-07-21 08:33:29.213340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.707 [2024-07-21 08:33:29.213383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.707 qpair failed and we were unable to recover it. 
00:37:19.707 [2024-07-21 08:33:29.213484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.213510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.213662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.213689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.213819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.213845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.213972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.213998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.214144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.214188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 
00:37:19.708 [2024-07-21 08:33:29.214322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.214349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.214510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.214535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.214633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.214677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.214823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.214866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.215035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.215067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 
00:37:19.708 [2024-07-21 08:33:29.215203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.215231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.215338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.215366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.215503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.215530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.215669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.215695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.215838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.215866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 
00:37:19.708 [2024-07-21 08:33:29.216042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.216070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.216214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.216242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.216372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.216400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.216552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.216577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.216741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.216768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 
00:37:19.708 [2024-07-21 08:33:29.216916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.216946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.217116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.217144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.217253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.217281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.217456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.217502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.217635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.217661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 
00:37:19.708 [2024-07-21 08:33:29.217838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.217867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.218056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.218108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.218289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.218333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.218442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.218469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.218588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.218623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 
00:37:19.708 [2024-07-21 08:33:29.218782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.218826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.218954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.218998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.219171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.219200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.708 [2024-07-21 08:33:29.219425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.708 [2024-07-21 08:33:29.219457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.708 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.219661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.219687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 
00:37:19.709 [2024-07-21 08:33:29.219843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.219869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.220047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.220097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.220213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.220261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.220391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.220418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.220547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.220573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 
00:37:19.709 [2024-07-21 08:33:29.220787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.220832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.220976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.221019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.221175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.221201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.221302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.221328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.221476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.221502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 
00:37:19.709 [2024-07-21 08:33:29.221639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.221666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.221810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.221854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.222040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.222083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.222229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.222257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.222425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.222451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 
00:37:19.709 [2024-07-21 08:33:29.222552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.222579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.222746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.222800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.222922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.222949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.223094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.223119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.223251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.223275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 
00:37:19.709 [2024-07-21 08:33:29.223370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.223395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.223490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.223515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.223650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.223677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.223772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.223796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.223939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.223967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 
00:37:19.709 [2024-07-21 08:33:29.224085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.224113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.224250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.224277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.224377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.224405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.224525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.224557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.224672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.224698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 
00:37:19.709 [2024-07-21 08:33:29.224790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.224814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.224911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.224937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.225085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.225113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.225271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.225299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.225469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.225497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 
00:37:19.709 [2024-07-21 08:33:29.225628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.225672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.225806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.225831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.225961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.225986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.226145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.226186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 00:37:19.709 [2024-07-21 08:33:29.226321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.709 [2024-07-21 08:33:29.226349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.709 qpair failed and we were unable to recover it. 
00:37:19.709 [2024-07-21 08:33:29.226515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.226544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.226643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.226686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.226807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.226834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.226975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.227003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.227117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.227144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 
00:37:19.710 [2024-07-21 08:33:29.227251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.227280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.227421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.227448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.227618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.227647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.227782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.227810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.227908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.227936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 
00:37:19.710 [2024-07-21 08:33:29.228043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.228071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.228186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.228213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.228351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.228378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.228490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.228519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.228678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.228717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 
00:37:19.710 [2024-07-21 08:33:29.228938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.228982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.229134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.229164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.229324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.229368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.229475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.229505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.229658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.229688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 
00:37:19.710 [2024-07-21 08:33:29.229875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.229920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.230138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.230187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.230294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.230321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.230452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.230480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.230610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.230641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 
00:37:19.710 [2024-07-21 08:33:29.230767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.230793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.230897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.230922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.231053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.231078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.231285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.231311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.231449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.231475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 
00:37:19.710 [2024-07-21 08:33:29.231632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.231658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.231780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.231810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.232008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.232039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.232187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.232216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.232346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.232371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 
00:37:19.710 [2024-07-21 08:33:29.232470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.232495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.232626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.232652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.232769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.232797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.232916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.232944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.233111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.233138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 
00:37:19.710 [2024-07-21 08:33:29.233263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.710 [2024-07-21 08:33:29.233288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.710 qpair failed and we were unable to recover it. 00:37:19.710 [2024-07-21 08:33:29.233426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.233451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.233580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.233609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.233743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.233769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.233905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.233932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 
00:37:19.711 [2024-07-21 08:33:29.234081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.234108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.234213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.234240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.234354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.234382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.234495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.234524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.234646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.234673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 
00:37:19.711 [2024-07-21 08:33:29.234801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.234826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.234952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.234979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.235138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.235166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.235311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.235341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.235513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.235542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 
00:37:19.711 [2024-07-21 08:33:29.235701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.235727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.235886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.235943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.236103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.236153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.236300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.236345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.236442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.236468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 
00:37:19.711 [2024-07-21 08:33:29.236637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.236664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.236801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.236828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.236951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.236977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.237133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.237159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.237313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.237340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 
00:37:19.711 [2024-07-21 08:33:29.237444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.237481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.237605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.237638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.237771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.237797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.237918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.237944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.238069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.238104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 
00:37:19.711 [2024-07-21 08:33:29.238205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.238230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.238358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.238385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.238518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.238546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.238650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.238676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.238777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.238802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 
00:37:19.711 [2024-07-21 08:33:29.238927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.238952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.239058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.239084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.239210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.239235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.239381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.239427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.239559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.239586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 
00:37:19.711 [2024-07-21 08:33:29.239694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.711 [2024-07-21 08:33:29.239720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.711 qpair failed and we were unable to recover it. 00:37:19.711 [2024-07-21 08:33:29.239819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.712 [2024-07-21 08:33:29.239845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.712 qpair failed and we were unable to recover it. 00:37:19.712 [2024-07-21 08:33:29.239994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.712 [2024-07-21 08:33:29.240038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.712 qpair failed and we were unable to recover it. 00:37:19.712 [2024-07-21 08:33:29.240151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.712 [2024-07-21 08:33:29.240179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.712 qpair failed and we were unable to recover it. 00:37:19.712 [2024-07-21 08:33:29.240312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.712 [2024-07-21 08:33:29.240338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.712 qpair failed and we were unable to recover it. 
00:37:19.712 [2024-07-21 08:33:29.240449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.712 [2024-07-21 08:33:29.240476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.712 qpair failed and we were unable to recover it. 00:37:19.712 [2024-07-21 08:33:29.240599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.712 [2024-07-21 08:33:29.240633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.712 qpair failed and we were unable to recover it. 00:37:19.712 [2024-07-21 08:33:29.240762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.712 [2024-07-21 08:33:29.240788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.712 qpair failed and we were unable to recover it. 00:37:19.712 [2024-07-21 08:33:29.240895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.712 [2024-07-21 08:33:29.240921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.712 qpair failed and we were unable to recover it. 00:37:19.712 [2024-07-21 08:33:29.241053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:19.712 [2024-07-21 08:33:29.241079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:19.712 qpair failed and we were unable to recover it. 
00:37:20.000 [2024-07-21 08:33:29.259234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.000 [2024-07-21 08:33:29.259258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.000 qpair failed and we were unable to recover it. 00:37:20.000 [2024-07-21 08:33:29.259394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.000 [2024-07-21 08:33:29.259419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.000 qpair failed and we were unable to recover it. 00:37:20.000 [2024-07-21 08:33:29.259550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.000 [2024-07-21 08:33:29.259575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.000 qpair failed and we were unable to recover it. 00:37:20.000 [2024-07-21 08:33:29.259711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.000 [2024-07-21 08:33:29.259739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.000 qpair failed and we were unable to recover it. 00:37:20.000 [2024-07-21 08:33:29.259880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.000 [2024-07-21 08:33:29.259913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.000 qpair failed and we were unable to recover it. 
00:37:20.001 [2024-07-21 08:33:29.260125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.260153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.260293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.260318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.260480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.260504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.260681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.260725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.260844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.260875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 
00:37:20.001 [2024-07-21 08:33:29.261013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.261055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.261181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.261207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.261342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.261369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.261504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.261531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.261685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.261716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 
00:37:20.001 [2024-07-21 08:33:29.261887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.261916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.262138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.262167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.262315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.262341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.262480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.262520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.262635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.262681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 
00:37:20.001 [2024-07-21 08:33:29.262804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.262832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.262957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.262983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.263116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.263143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.263246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.263273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.263381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.263407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 
00:37:20.001 [2024-07-21 08:33:29.263535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.263560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.263713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.263743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.263914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.263944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.264117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.264147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.264296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.264322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 
00:37:20.001 [2024-07-21 08:33:29.264447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.001 [2024-07-21 08:33:29.264474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.001 qpair failed and we were unable to recover it. 00:37:20.001 [2024-07-21 08:33:29.264627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.264678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.264883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.264912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.265025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.265053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.265201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.265228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 
00:37:20.002 [2024-07-21 08:33:29.265382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.265409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.265533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.265559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.265744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.265774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.265915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.265945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.266143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.266172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 
00:37:20.002 [2024-07-21 08:33:29.266299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.266325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.266451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.266479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.266612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.266643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.266779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.266805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.266935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.266962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 
00:37:20.002 [2024-07-21 08:33:29.267099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.267125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.267284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.267309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.267451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.267479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.267621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.267669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.267797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.267823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 
00:37:20.002 [2024-07-21 08:33:29.267934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.267960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.268086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.268112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.268243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.268270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.268397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.268424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.268549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.268575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 
00:37:20.002 [2024-07-21 08:33:29.268693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.268720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.268810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.002 [2024-07-21 08:33:29.268836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.002 qpair failed and we were unable to recover it. 00:37:20.002 [2024-07-21 08:33:29.268963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.268990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.269101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.269128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.269251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.269276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 
00:37:20.003 [2024-07-21 08:33:29.269384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.269409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.269534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.269560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.269691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.269718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.269852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.269877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.269981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.270008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 
00:37:20.003 [2024-07-21 08:33:29.270107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.270133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.270234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.270260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.270355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.270380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.270482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.270508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.270656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.270700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 
00:37:20.003 [2024-07-21 08:33:29.270837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.270863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.270962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.270993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.271090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.271114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.271238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.271264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 00:37:20.003 [2024-07-21 08:33:29.271396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.003 [2024-07-21 08:33:29.271421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.003 qpair failed and we were unable to recover it. 
00:37:20.003 [2024-07-21 08:33:29.271562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.003 [2024-07-21 08:33:29.271589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.003 qpair failed and we were unable to recover it.
00:37:20.003 [2024-07-21 08:33:29.271763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.003 [2024-07-21 08:33:29.271802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.003 qpair failed and we were unable to recover it.
00:37:20.003 [2024-07-21 08:33:29.271926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.003 [2024-07-21 08:33:29.271971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.003 qpair failed and we were unable to recover it.
00:37:20.003 [2024-07-21 08:33:29.272159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.003 [2024-07-21 08:33:29.272189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.003 qpair failed and we were unable to recover it.
00:37:20.003 [2024-07-21 08:33:29.272326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.003 [2024-07-21 08:33:29.272370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.003 qpair failed and we were unable to recover it.
00:37:20.003 [2024-07-21 08:33:29.272505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.003 [2024-07-21 08:33:29.272532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.003 qpair failed and we were unable to recover it.
00:37:20.003 [2024-07-21 08:33:29.272660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.003 [2024-07-21 08:33:29.272695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.003 qpair failed and we were unable to recover it.
00:37:20.003 [2024-07-21 08:33:29.272821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.003 [2024-07-21 08:33:29.272847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.003 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.273008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.273034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.273159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.273185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.273302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.273330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.273436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.273461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.273550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.273575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.273737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.273765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.273885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.273912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.274030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.274058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.274200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.274227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.274397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.274424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.274532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.274560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.274686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.274711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.274838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.274864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.275006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.275035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.275202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.275230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.275348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.275382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.275527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.275555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.275710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.275749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.275934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.004 [2024-07-21 08:33:29.275979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.004 qpair failed and we were unable to recover it.
00:37:20.004 [2024-07-21 08:33:29.276118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.276163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.276273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.276317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.276476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.276502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.276623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.276650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.276798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.276842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.277092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.277144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.277261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.277304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.277433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.277460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.277688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.277732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.277878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.277921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.278204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.278257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.278381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.278407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.278562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.278587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.278744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.278771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.278897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.278922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.279013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.279038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.279147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.279171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.279314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.279341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.279482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.279510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.279682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.279710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.279819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.279847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.279995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.280023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.280165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.280192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.280339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.280371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.280487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.280514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.280665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.280692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.280843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.005 [2024-07-21 08:33:29.280887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.005 qpair failed and we were unable to recover it.
00:37:20.005 [2024-07-21 08:33:29.281033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.281077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.281286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.281328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.281489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.281515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.281680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.281707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.281838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.281864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.282006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.282032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.282162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.282188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.282294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.282321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.282479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.282504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.282619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.282659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.282774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.282802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.282935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.282962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.283066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.283093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.283223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.283249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.283403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.283429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.283551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.283591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.283752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.283799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.283977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.284020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.284172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.284216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.284346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.284372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.284529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.284555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.284695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.284739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.284863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.284893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.285064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.285121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.285327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.285358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.285474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.285501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.285673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.006 [2024-07-21 08:33:29.285704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.006 qpair failed and we were unable to recover it.
00:37:20.006 [2024-07-21 08:33:29.285842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.285871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.286083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.286111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.286235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.286261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.286413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.286439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.286601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.286633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.286770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.286799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.286976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.287005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.287115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.287144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.287288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.287314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.287443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.287475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.287622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.287675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.287809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.287841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.288044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.288074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.288190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.288216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.288375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.288401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.288512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.288537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.288705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.288749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.288871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.288897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.289035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.289060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.289206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.289231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.289364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.289392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.289501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.289528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.289694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.289724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.289876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.289906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.290132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.290160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.290283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.290310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.290410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.290436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.290593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.290623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.290729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.290755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.290856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.290881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.291039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.291067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.291207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.291232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.291364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.291390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.007 [2024-07-21 08:33:29.291544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.007 [2024-07-21 08:33:29.291569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.007 qpair failed and we were unable to recover it.
00:37:20.008 [2024-07-21 08:33:29.291696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.291727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.291929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.291972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.292192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.292253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.292397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.292422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.292548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.292572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 
00:37:20.008 [2024-07-21 08:33:29.292751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.292783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.292937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.292963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.293107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.293136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.293285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.293310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.293439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.293466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 
00:37:20.008 [2024-07-21 08:33:29.293636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.293697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.293875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.293906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.294081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.294110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.294284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.294309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.294405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.294431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 
00:37:20.008 [2024-07-21 08:33:29.294562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.294589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.294763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.294793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.294978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.295041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.295255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.295309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.295463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.295488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 
00:37:20.008 [2024-07-21 08:33:29.295594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.295625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.295819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.295848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.296012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.296041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.296217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.296243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.296365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.296390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 
00:37:20.008 [2024-07-21 08:33:29.296509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.296534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.296659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.296688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.296860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.296892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.297063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.297092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.297250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.297282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 
00:37:20.008 [2024-07-21 08:33:29.297413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.297439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.297562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.297589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.297722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.297769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.297934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.297961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.298136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.298180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 
00:37:20.008 [2024-07-21 08:33:29.298334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.298360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.298514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.298541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.298693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.298722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.298891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.298920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 00:37:20.008 [2024-07-21 08:33:29.299055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.008 [2024-07-21 08:33:29.299083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.008 qpair failed and we were unable to recover it. 
00:37:20.008 [2024-07-21 08:33:29.299199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.299227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.299343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.299373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.299503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.299534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.299694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.299721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.299852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.299878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 
00:37:20.009 [2024-07-21 08:33:29.300015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.300058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.300161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.300191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.300351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.300376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.300535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.300560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.300707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.300751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 
00:37:20.009 [2024-07-21 08:33:29.300942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.300971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.301211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.301262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.301401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.301429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.301557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.301583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.301701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.301726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 
00:37:20.009 [2024-07-21 08:33:29.301851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.301876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.302012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.302040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.302165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.302206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.302374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.302404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.302577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.302606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 
00:37:20.009 [2024-07-21 08:33:29.302791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.302817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.302954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.302982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.303117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.303146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.303280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.303308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.303453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.303482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 
00:37:20.009 [2024-07-21 08:33:29.303626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.303663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.303792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.303818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.303970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.304012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.304189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.304217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.304387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.304420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 
00:37:20.009 [2024-07-21 08:33:29.304559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.304586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.304752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.304779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.304933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.304959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.009 qpair failed and we were unable to recover it. 00:37:20.009 [2024-07-21 08:33:29.305171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.009 [2024-07-21 08:33:29.305231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.010 qpair failed and we were unable to recover it. 00:37:20.010 [2024-07-21 08:33:29.305375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.010 [2024-07-21 08:33:29.305420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.010 qpair failed and we were unable to recover it. 
00:37:20.010 [2024-07-21 08:33:29.305563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.010 [2024-07-21 08:33:29.305589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.010 qpair failed and we were unable to recover it. 00:37:20.010 [2024-07-21 08:33:29.305735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.010 [2024-07-21 08:33:29.305761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.010 qpair failed and we were unable to recover it. 00:37:20.010 [2024-07-21 08:33:29.305867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.010 [2024-07-21 08:33:29.305895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.010 qpair failed and we were unable to recover it. 00:37:20.010 [2024-07-21 08:33:29.306040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.010 [2024-07-21 08:33:29.306065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.010 qpair failed and we were unable to recover it. 00:37:20.010 [2024-07-21 08:33:29.306215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.010 [2024-07-21 08:33:29.306240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.010 qpair failed and we were unable to recover it. 
00:37:20.010 [... identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." record repeated continuously from 08:33:29.306 through 08:33:29.325 for tqpair=0x64d560, 0x7fd7dc000b90, and 0x7fd7e4000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:37:20.013 [2024-07-21 08:33:29.325407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.013 [2024-07-21 08:33:29.325438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.013 qpair failed and we were unable to recover it. 00:37:20.013 [2024-07-21 08:33:29.325580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.013 [2024-07-21 08:33:29.325609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.013 qpair failed and we were unable to recover it. 00:37:20.013 [2024-07-21 08:33:29.325734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.013 [2024-07-21 08:33:29.325759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.013 qpair failed and we were unable to recover it. 00:37:20.013 [2024-07-21 08:33:29.325863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.013 [2024-07-21 08:33:29.325889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.013 qpair failed and we were unable to recover it. 00:37:20.013 [2024-07-21 08:33:29.326067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.013 [2024-07-21 08:33:29.326095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.013 qpair failed and we were unable to recover it. 
00:37:20.013 [2024-07-21 08:33:29.326230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.013 [2024-07-21 08:33:29.326258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.013 qpair failed and we were unable to recover it. 00:37:20.013 [2024-07-21 08:33:29.326404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.013 [2024-07-21 08:33:29.326434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.013 qpair failed and we were unable to recover it. 00:37:20.013 [2024-07-21 08:33:29.326551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.013 [2024-07-21 08:33:29.326580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.013 qpair failed and we were unable to recover it. 00:37:20.013 [2024-07-21 08:33:29.326767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.013 [2024-07-21 08:33:29.326808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.013 qpair failed and we were unable to recover it. 00:37:20.013 [2024-07-21 08:33:29.326940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.013 [2024-07-21 08:33:29.326985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.013 qpair failed and we were unable to recover it. 
00:37:20.013 [2024-07-21 08:33:29.327127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.013 [2024-07-21 08:33:29.327173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.013 qpair failed and we were unable to recover it. 00:37:20.013 [2024-07-21 08:33:29.327357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.327401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.327497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.327524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.327638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.327664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.327777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.327803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 
00:37:20.014 [2024-07-21 08:33:29.327921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.327962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.328104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.328132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.328243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.328273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.328420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.328445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.328600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.328631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 
00:37:20.014 [2024-07-21 08:33:29.328759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.328785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.328903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.328950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.329127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.329170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.329311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.329355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.329455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.329481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 
00:37:20.014 [2024-07-21 08:33:29.329620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.329647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.329795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.329840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.330022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.330070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.330248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.330294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.330412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.330451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 
00:37:20.014 [2024-07-21 08:33:29.330585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.330611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.330753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.330781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.330928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.330957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.331099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.331126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.331265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.331306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 
00:37:20.014 [2024-07-21 08:33:29.331451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.331484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.331652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.331691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.331851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.014 [2024-07-21 08:33:29.331883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.014 qpair failed and we were unable to recover it. 00:37:20.014 [2024-07-21 08:33:29.332071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.332100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.332239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.332269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 
00:37:20.015 [2024-07-21 08:33:29.332408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.332444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.332618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.332645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.332802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.332828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.332944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.332987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.333165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.333211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 
00:37:20.015 [2024-07-21 08:33:29.333373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.333404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.333545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.333573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.333771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.333810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.334016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.334081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.334185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.334213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 
00:37:20.015 [2024-07-21 08:33:29.334362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.334387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.334554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.334579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.334719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.334745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.334872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.334915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.335056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.335084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 
00:37:20.015 [2024-07-21 08:33:29.335223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.335252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.335383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.335409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.015 [2024-07-21 08:33:29.335534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.015 [2024-07-21 08:33:29.335559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.015 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.335736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.335762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.335909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.335939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 
00:37:20.016 [2024-07-21 08:33:29.336106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.336135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.336277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.336306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.336452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.336482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.336599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.336629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.336768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.336794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 
00:37:20.016 [2024-07-21 08:33:29.336922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.336947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.337089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.337118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.337261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.337290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.337427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.337455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.337603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.337634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 
00:37:20.016 [2024-07-21 08:33:29.337765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.337792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.337883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.337908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.338058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.338088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.338255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.338284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.338388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.338417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 
00:37:20.016 [2024-07-21 08:33:29.338581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.338627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.338765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.338793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.338924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.338968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.339096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.339122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 00:37:20.016 [2024-07-21 08:33:29.339250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.016 [2024-07-21 08:33:29.339276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.016 qpair failed and we were unable to recover it. 
[the same three-line failure (posix.c:1038 connect() errno = 111, nvme_tcp.c:2383 sock connection error, "qpair failed and we were unable to recover it") repeats continuously from 08:33:29.339421 through 08:33:29.358934, cycling over tqpair values 0x7fd7dc000b90, 0x7fd7e4000b90, and 0x64d560, all targeting addr=10.0.0.2, port=4420]
00:37:20.020 [2024-07-21 08:33:29.359076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.020 [2024-07-21 08:33:29.359104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.020 qpair failed and we were unable to recover it. 00:37:20.020 [2024-07-21 08:33:29.359238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.020 [2024-07-21 08:33:29.359266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.020 qpair failed and we were unable to recover it. 00:37:20.020 [2024-07-21 08:33:29.359398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.020 [2024-07-21 08:33:29.359426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.020 qpair failed and we were unable to recover it. 00:37:20.020 [2024-07-21 08:33:29.359540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.020 [2024-07-21 08:33:29.359567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.020 qpair failed and we were unable to recover it. 00:37:20.020 [2024-07-21 08:33:29.359714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.020 [2024-07-21 08:33:29.359753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.020 qpair failed and we were unable to recover it. 
00:37:20.020 [2024-07-21 08:33:29.359900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.020 [2024-07-21 08:33:29.359938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.020 qpair failed and we were unable to recover it. 00:37:20.020 [2024-07-21 08:33:29.360121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.020 [2024-07-21 08:33:29.360152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.360265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.360295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.360470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.360498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.360607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.360659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 
00:37:20.021 [2024-07-21 08:33:29.360760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.360787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.360912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.360946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.361119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.361147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.361250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.361280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.361457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.361514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 
00:37:20.021 [2024-07-21 08:33:29.361676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.361704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.361833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.361860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.361989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.362015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.362181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.362207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.362305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.362331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 
00:37:20.021 [2024-07-21 08:33:29.362458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.362484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.362609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.362640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.362746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.362773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.362920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.362949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.363072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.363098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 
00:37:20.021 [2024-07-21 08:33:29.363293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.363322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.363474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.363499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.363653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.363680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.363813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.363839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.363984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.364012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 
00:37:20.021 [2024-07-21 08:33:29.364178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.364207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.364342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.364370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.364513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.364542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.364678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.364705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.021 [2024-07-21 08:33:29.364828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.364871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 
00:37:20.021 [2024-07-21 08:33:29.365087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.021 [2024-07-21 08:33:29.365130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.021 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.365269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.365312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.365405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.365430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.365531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.365557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.365766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.365792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 
00:37:20.022 [2024-07-21 08:33:29.365963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.366006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.366146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.366175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.366309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.366337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.366446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.366472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.366628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.366655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 
00:37:20.022 [2024-07-21 08:33:29.366814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.366840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.366963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.366989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.367111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.367138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.367266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.367292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.367462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.367501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 
00:37:20.022 [2024-07-21 08:33:29.367607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.367662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.367808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.367842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.367978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.368005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.368169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.368197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.368326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.368351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 
00:37:20.022 [2024-07-21 08:33:29.368467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.368492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.368648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.368674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.368802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.368827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.368974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.369002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.369110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.369138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 
00:37:20.022 [2024-07-21 08:33:29.369280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.369308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.369426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.369454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.369580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.369606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.369789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.369833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.370044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.370088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 
00:37:20.022 [2024-07-21 08:33:29.370347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.370398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.370603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.370639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.370795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.370821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.371000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.371043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.371153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.371196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 
00:37:20.022 [2024-07-21 08:33:29.371371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.371401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.371527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.371552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.371664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.371690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.371834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.371862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 00:37:20.022 [2024-07-21 08:33:29.371979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.022 [2024-07-21 08:33:29.372008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.022 qpair failed and we were unable to recover it. 
00:37:20.022 [2024-07-21 08:33:29.372177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.022 [2024-07-21 08:33:29.372205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.022 qpair failed and we were unable to recover it.
00:37:20.022 [2024-07-21 08:33:29.372377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.022 [2024-07-21 08:33:29.372405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.372534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.372560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.372683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.372713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.372834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.372877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.372990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.373018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.373157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.373184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.373321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.373348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.373481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.373522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.373677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.373703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.373856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.373880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.374019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.374048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.374196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.374223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.374376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.374404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.374511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.374538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.374673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.374698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.374806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.374833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.374948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.374975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.375094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.375124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.375261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.375291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.375447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.375474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.375632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.375659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.375810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.375835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.375974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.376001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.376145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.376174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.376344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.376373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.376519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.376546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.376725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.376751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.376890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.376929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.377087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.377133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.377257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.377301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.377427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.377453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.377602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.377660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.377806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.377849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.378024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.378070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.378245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.378290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.378396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.378423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.378536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.378563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.378712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.378741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.378853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.378881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.379023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.379052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.379192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.379219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.023 qpair failed and we were unable to recover it.
00:37:20.023 [2024-07-21 08:33:29.379334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.023 [2024-07-21 08:33:29.379363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.379518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.379546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.379708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.379735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.379861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.379904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.380079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.380123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.380273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.380317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.380449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.380475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.380634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.380661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.380818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.380844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.380991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.381019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.381156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.381184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.381328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.381356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.381470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.381499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.381643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.381687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.381801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.381829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.381972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.382000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.382188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.382246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.382398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.382442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.382580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.382606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.382720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.382746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.382895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.382938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.383086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.383130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.383309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.383338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.383483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.383509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.383665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.383692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.383820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.383846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.383977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.384003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.384162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.384188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.384330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.384358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.384468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.384493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.384588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.384620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.384770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.384798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.384940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.384968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.385079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.385107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.385246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.385274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.385448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.385475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.385596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.385627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.385744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.385773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.385911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.385939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.386081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.386110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.386248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.386276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.024 qpair failed and we were unable to recover it.
00:37:20.024 [2024-07-21 08:33:29.386380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.024 [2024-07-21 08:33:29.386421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.386555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.386584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.386689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.386713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.386822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.386848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.386994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.387022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.387195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.387224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.387343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.387371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.387522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.387547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.387680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.387706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.387832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.387857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.388005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.388033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.388228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.388276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.388447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.388475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.388588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.388622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.388763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.388788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.388954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.388982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.389145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.389174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.389336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.389363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.389507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.389535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.389689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.389715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.389840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.389865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.389961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.389986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.390118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.390145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.390289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.390332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.390442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.390469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.390627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.390670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.390799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.390824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.390949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.390989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.391095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.391123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.391293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.391321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.391485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.391513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.391671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.025 [2024-07-21 08:33:29.391697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.025 qpair failed and we were unable to recover it.
00:37:20.025 [2024-07-21 08:33:29.391799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.025 [2024-07-21 08:33:29.391825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.025 qpair failed and we were unable to recover it. 00:37:20.025 [2024-07-21 08:33:29.391947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.025 [2024-07-21 08:33:29.391971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.025 qpair failed and we were unable to recover it. 00:37:20.025 [2024-07-21 08:33:29.392090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.025 [2024-07-21 08:33:29.392118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.025 qpair failed and we were unable to recover it. 00:37:20.025 [2024-07-21 08:33:29.392275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.025 [2024-07-21 08:33:29.392302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.025 qpair failed and we were unable to recover it. 00:37:20.025 [2024-07-21 08:33:29.392444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.025 [2024-07-21 08:33:29.392472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.025 qpair failed and we were unable to recover it. 
00:37:20.025 [2024-07-21 08:33:29.392606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.025 [2024-07-21 08:33:29.392637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.025 qpair failed and we were unable to recover it. 00:37:20.025 [2024-07-21 08:33:29.392765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.025 [2024-07-21 08:33:29.392790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.025 qpair failed and we were unable to recover it. 00:37:20.025 [2024-07-21 08:33:29.392919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.025 [2024-07-21 08:33:29.392960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.025 qpair failed and we were unable to recover it. 00:37:20.025 [2024-07-21 08:33:29.393099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.025 [2024-07-21 08:33:29.393128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.025 qpair failed and we were unable to recover it. 00:37:20.025 [2024-07-21 08:33:29.393328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.393355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 
00:37:20.026 [2024-07-21 08:33:29.393474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.393503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.393662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.393688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.393862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.393890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.394057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.394084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.394226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.394254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 
00:37:20.026 [2024-07-21 08:33:29.394386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.394414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.394531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.394559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.394742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.394768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.394939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.394967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.395096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.395121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 
00:37:20.026 [2024-07-21 08:33:29.395271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.395298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.395413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.395441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.395584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.395609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.395750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.395775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.395906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.395932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 
00:37:20.026 [2024-07-21 08:33:29.396101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.396129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.396269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.396297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.396423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.396464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.396592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.396627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.396772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.396797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 
00:37:20.026 [2024-07-21 08:33:29.396951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.396975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.397121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.397148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.397278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.397305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.397472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.397498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.397627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.397652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 
00:37:20.026 [2024-07-21 08:33:29.397782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.397807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.397973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.397999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.398126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.398152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.398318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.398346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.398489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.398515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 
00:37:20.026 [2024-07-21 08:33:29.398625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.398650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.398777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.398802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.398922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.398947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.026 [2024-07-21 08:33:29.399069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.026 [2024-07-21 08:33:29.399094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.026 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.399219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.399247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 
00:37:20.027 [2024-07-21 08:33:29.399390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.399415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.399514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.399539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.399661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.399687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.399841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.399867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.400012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.400041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 
00:37:20.027 [2024-07-21 08:33:29.400183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.400212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.400338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.400364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.400519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.400563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.400726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.400751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.400876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.400902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 
00:37:20.027 [2024-07-21 08:33:29.401022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.401047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.401170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.401195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.401347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.401372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.401480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.401507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.401688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.401715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 
00:37:20.027 [2024-07-21 08:33:29.401807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.401833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.401960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.401986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.402113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.402157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.402282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.402307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.402409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.402434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 
00:37:20.027 [2024-07-21 08:33:29.402587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.402622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.402746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.402771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.402902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.402927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.403062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.403087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.403215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.403240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 
00:37:20.027 [2024-07-21 08:33:29.403413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.403442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.403606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.403646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.403772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.403797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.403946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.403989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.404156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.404185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 
00:37:20.027 [2024-07-21 08:33:29.404330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.404357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.404492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.404516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.404651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.404677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.404842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.404872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.404996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.405037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 
00:37:20.027 [2024-07-21 08:33:29.405210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.405238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.405356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.405382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.405538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.405562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.405709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.405738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 00:37:20.027 [2024-07-21 08:33:29.405863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.027 [2024-07-21 08:33:29.405889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.027 qpair failed and we were unable to recover it. 
00:37:20.027 [2024-07-21 08:33:29.406018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.406044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.406222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.406251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.406396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.406420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.406548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.406573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.406738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.406768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 
00:37:20.028 [2024-07-21 08:33:29.406911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.406936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.407091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.407136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.407253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.407281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.407454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.407478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.407637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.407663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 
00:37:20.028 [2024-07-21 08:33:29.407831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.407859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.408002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.408027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.408132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.408158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.408291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.408315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.408464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.408493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 
00:37:20.028 [2024-07-21 08:33:29.408654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.408681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.408806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.408830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.408932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.408956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.409048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.409073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.409226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.409253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 
00:37:20.028 [2024-07-21 08:33:29.409431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.409456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.409560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.409585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.409716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.409742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.409872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.409897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.410026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.410068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 
00:37:20.028 [2024-07-21 08:33:29.410207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.410234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.410387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.410412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.410537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.410561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.410734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.410760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.410879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.410903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 
00:37:20.028 [2024-07-21 08:33:29.411031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.411056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.411234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.411259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.411414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.411439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.411608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.411642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.411783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.411815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 
00:37:20.028 [2024-07-21 08:33:29.411963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.411988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.412120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.412159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.412320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.412344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.412439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.412464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.412622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.412664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 
00:37:20.028 [2024-07-21 08:33:29.412779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.028 [2024-07-21 08:33:29.412806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.028 qpair failed and we were unable to recover it. 00:37:20.028 [2024-07-21 08:33:29.412958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.412982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.413116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.413140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.413324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.413353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.413509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.413534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 
00:37:20.029 [2024-07-21 08:33:29.413659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.413684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.413808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.413835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.413951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.413976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.414109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.414133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.414332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.414357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 
00:37:20.029 [2024-07-21 08:33:29.414537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.414565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.414689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.414715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.414823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.414849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.415014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.415040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.415136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.415178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 
00:37:20.029 [2024-07-21 08:33:29.415348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.415376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.415517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.415541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.415667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.415693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.415820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.415849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.415988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.416013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 
00:37:20.029 [2024-07-21 08:33:29.416123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.416147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.416305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.416339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.416486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.416511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.416634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.416661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.416816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.416844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 
00:37:20.029 [2024-07-21 08:33:29.416966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.416992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.417093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.417117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.417216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.417241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.417366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.417390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.417509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.417534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 
00:37:20.029 [2024-07-21 08:33:29.417668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.417693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.417854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.417879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.417977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.418003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.418134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.418176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.418313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.418338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 
00:37:20.029 [2024-07-21 08:33:29.418446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.418471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.418600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.418630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.418731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.418756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.418859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.418883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.419031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.419059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 
00:37:20.029 [2024-07-21 08:33:29.419184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.419210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.029 [2024-07-21 08:33:29.419342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.029 [2024-07-21 08:33:29.419367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.029 qpair failed and we were unable to recover it. 00:37:20.030 [2024-07-21 08:33:29.419494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.030 [2024-07-21 08:33:29.419537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.030 qpair failed and we were unable to recover it. 00:37:20.030 [2024-07-21 08:33:29.419710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.030 [2024-07-21 08:33:29.419736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.030 qpair failed and we were unable to recover it. 00:37:20.030 [2024-07-21 08:33:29.419829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.030 [2024-07-21 08:33:29.419854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.030 qpair failed and we were unable to recover it. 
00:37:20.030 [2024-07-21 08:33:29.420032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.030 [2024-07-21 08:33:29.420060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.030 qpair failed and we were unable to recover it. 00:37:20.030 [2024-07-21 08:33:29.420197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.030 [2024-07-21 08:33:29.420222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.030 qpair failed and we were unable to recover it. 00:37:20.030 [2024-07-21 08:33:29.420344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.030 [2024-07-21 08:33:29.420369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.030 qpair failed and we were unable to recover it. 00:37:20.030 [2024-07-21 08:33:29.420502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.030 [2024-07-21 08:33:29.420530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.030 qpair failed and we were unable to recover it. 00:37:20.030 [2024-07-21 08:33:29.420679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.030 [2024-07-21 08:33:29.420704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.030 qpair failed and we were unable to recover it. 
00:37:20.031 [2024-07-21 08:33:29.429433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.031 [2024-07-21 08:33:29.429492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.031 qpair failed and we were unable to recover it.
00:37:20.031 [2024-07-21 08:33:29.429609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.031 [2024-07-21 08:33:29.429645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.031 qpair failed and we were unable to recover it.
00:37:20.033 [2024-07-21 08:33:29.439416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.439459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.439627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.439654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.439778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.439822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.439967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.440011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.440187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.440231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 
00:37:20.033 [2024-07-21 08:33:29.440380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.440424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.440559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.440587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.440746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.440772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.440944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.440971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.441112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.441141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 
00:37:20.033 [2024-07-21 08:33:29.441276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.441304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.441456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.441483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.441644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.441683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.441853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.441880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.442057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.442100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 
00:37:20.033 [2024-07-21 08:33:29.442253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.442296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.442402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.442428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.442572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.442597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.442732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.442758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.442901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.442944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 
00:37:20.033 [2024-07-21 08:33:29.443071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.443100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.443215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.443241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.443364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.443390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.443550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.443576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.443747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.443778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 
00:37:20.033 [2024-07-21 08:33:29.443925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.443953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.444093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.444120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.444254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.444282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.444418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.444446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.444588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.444623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 
00:37:20.033 [2024-07-21 08:33:29.444765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.444792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.444954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.444981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.445163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.445191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.445376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.445404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.445544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.445571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 
00:37:20.033 [2024-07-21 08:33:29.445755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.445781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.445947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.445975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.446088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.446116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.446256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.446283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.033 qpair failed and we were unable to recover it. 00:37:20.033 [2024-07-21 08:33:29.446420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.033 [2024-07-21 08:33:29.446448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 
00:37:20.034 [2024-07-21 08:33:29.446553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.446581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.446735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.446760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.446908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.446955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.447136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.447180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.447337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.447381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 
00:37:20.034 [2024-07-21 08:33:29.447506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.447531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.447636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.447663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.447813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.447856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.448035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.448065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.448208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.448236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 
00:37:20.034 [2024-07-21 08:33:29.448406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.448434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.448612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.448645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.448754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.448780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.448923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.448952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.449145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.449189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 
00:37:20.034 [2024-07-21 08:33:29.449338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.449381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.449514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.449540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.449708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.449739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.449881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.449909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.450073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.450100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 
00:37:20.034 [2024-07-21 08:33:29.450203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.450231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.450368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.450397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.450549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.450577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.450708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.450749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.450916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.450944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 
00:37:20.034 [2024-07-21 08:33:29.451116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.451143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.451246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.451274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.451393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.451420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.451563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.451590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.451749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.451776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 
00:37:20.034 [2024-07-21 08:33:29.451900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.451925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.452082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.452109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.452275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.452304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.452432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.452459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 00:37:20.034 [2024-07-21 08:33:29.452597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.034 [2024-07-21 08:33:29.452632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.034 qpair failed and we were unable to recover it. 
00:37:20.035 [2024-07-21 08:33:29.452779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.452805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.452919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.452947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.453115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.453144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.453307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.453364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.453535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.453563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 
00:37:20.035 [2024-07-21 08:33:29.453702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.453729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.453874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.453917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.454093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.454138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.454307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.454334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.454463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.454490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 
00:37:20.035 [2024-07-21 08:33:29.454619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.454644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.454793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.454819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.454967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.454995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.455166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.455193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.455303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.455332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 
00:37:20.035 [2024-07-21 08:33:29.455509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.455537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.455668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.455695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.455875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.455919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.456035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.456063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.456226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.456270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 
00:37:20.035 [2024-07-21 08:33:29.456430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.456456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.456609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.456641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.456791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.456821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.457021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.457063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.457208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.457251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 
00:37:20.035 [2024-07-21 08:33:29.457361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.457387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.457543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.457569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.457675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.457719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.457864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.457893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.458035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.458063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 
00:37:20.035 [2024-07-21 08:33:29.458201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.458230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.458395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.458423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.458565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.458592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.458722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.458768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.458942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.458988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 
00:37:20.035 [2024-07-21 08:33:29.459144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.459192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.459347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.459373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.459471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.459498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.459632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.459658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 00:37:20.035 [2024-07-21 08:33:29.459784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.459810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.035 qpair failed and we were unable to recover it. 
00:37:20.035 [2024-07-21 08:33:29.459941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.035 [2024-07-21 08:33:29.459967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.460089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.460114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.460239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.460265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.460394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.460425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.460555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.460582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 
00:37:20.036 [2024-07-21 08:33:29.460729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.460758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.460876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.460906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.461079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.461107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.461241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.461270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.461413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.461441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 
00:37:20.036 [2024-07-21 08:33:29.461579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.461607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.461797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.461847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.461997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.462040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.462154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.462198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.462327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.462353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 
00:37:20.036 [2024-07-21 08:33:29.462502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.462528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.462631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.462658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.462775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.462804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.462963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.463007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.463154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.463202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 
00:37:20.036 [2024-07-21 08:33:29.463329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.463355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.463509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.463534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.463708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.463753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.463906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.463948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.464102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.464147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 
00:37:20.036 [2024-07-21 08:33:29.464307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.464333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.464454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.464480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.464628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.464655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.464831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.464874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.465047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.465091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 
00:37:20.036 [2024-07-21 08:33:29.465246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.465277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.465435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.465461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.465589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.465621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.465790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.465819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.465959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.465987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 
00:37:20.036 [2024-07-21 08:33:29.466127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.466156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.466266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.466295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.466401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.466429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.466599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.466630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.466764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.466788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 
00:37:20.036 [2024-07-21 08:33:29.466897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.036 [2024-07-21 08:33:29.466939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.036 qpair failed and we were unable to recover it. 00:37:20.036 [2024-07-21 08:33:29.467084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.467113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.467219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.467248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.467388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.467416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.467594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.467628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 
00:37:20.037 [2024-07-21 08:33:29.467733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.467758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.467857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.467898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.468103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.468130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.468265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.468293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.468435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.468463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 
00:37:20.037 [2024-07-21 08:33:29.468634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.468660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.468784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.468809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.468940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.468965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.469141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.469170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.469281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.469308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 
00:37:20.037 [2024-07-21 08:33:29.469453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.469481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.469628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.469654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.469773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.469803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.469927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.469953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.470102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.470130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 
00:37:20.037 [2024-07-21 08:33:29.470296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.470324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.470429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.470456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.470578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.470602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.470763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.470789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 00:37:20.037 [2024-07-21 08:33:29.470939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.037 [2024-07-21 08:33:29.470968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.037 qpair failed and we were unable to recover it. 
00:37:20.037 [2024-07-21 08:33:29.471130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.471158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.471317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.471344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.471498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.471526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.471694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.471720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.471846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.471871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.472052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.472081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.472248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.472276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.472420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.472449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.472599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.472630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.472749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.472774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.472866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.472890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.473016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.473044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.473243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.473271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.473383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.473412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.473557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.473584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.473752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.473791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.473915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.473961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.474070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.474098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.474273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.474302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.474444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.037 [2024-07-21 08:33:29.474475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.037 qpair failed and we were unable to recover it.
00:37:20.037 [2024-07-21 08:33:29.474579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.474605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.474742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.474786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.474916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.474941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.475072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.475098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.475236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.475266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.475416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.475441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.475547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.475573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.475694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.475723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.475889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.475916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.476085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.476112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.476254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.476282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.476417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.476446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.476619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.476647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.476789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.476818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.476988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.477016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.477122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.477150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.477326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.477354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.477495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.477520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.477645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.477671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.477798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.477823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.477964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.477993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.478144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.478172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.478395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.478423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.478570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.478595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.478742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.478776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.478942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.478973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.479159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.479185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.479313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.479340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.479474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.479500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.479668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.479698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.479898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.479928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.480051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.480077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.480209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.480235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.480338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.480365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.480496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.480522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.480708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.480752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.480897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.480927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.481048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.481074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.481172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.038 [2024-07-21 08:33:29.481198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.038 qpair failed and we were unable to recover it.
00:37:20.038 [2024-07-21 08:33:29.481336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.481362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.481464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.481490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.481585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.481611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.481744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.481770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.481879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.481906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.482031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.482057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.482151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.482177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.482334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.482360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.482484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.482513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.482644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.482690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.482857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.482886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.483057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.483083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.483210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.483235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.483366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.483392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.483501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.483532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.483645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.483702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.483854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.483884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.484031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.484056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.484178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.484204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.484303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.484328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.484452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.484476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.484577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.484602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.484704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.484729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.484843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.484871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.484997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.485023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.485120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.485146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.485297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.485323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.485456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.485482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.485622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.485667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.485883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.485912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.486056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.486082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.486188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.486214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.486338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.486364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.486518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.486544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.486690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.486719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.486880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.039 [2024-07-21 08:33:29.486909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.039 qpair failed and we were unable to recover it.
00:37:20.039 [2024-07-21 08:33:29.487080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.039 [2024-07-21 08:33:29.487112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.039 qpair failed and we were unable to recover it. 00:37:20.039 [2024-07-21 08:33:29.487237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.039 [2024-07-21 08:33:29.487264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.039 qpair failed and we were unable to recover it. 00:37:20.039 [2024-07-21 08:33:29.487393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.039 [2024-07-21 08:33:29.487419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.039 qpair failed and we were unable to recover it. 00:37:20.039 [2024-07-21 08:33:29.487569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.039 [2024-07-21 08:33:29.487594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.039 qpair failed and we were unable to recover it. 00:37:20.039 [2024-07-21 08:33:29.487706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.039 [2024-07-21 08:33:29.487732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.039 qpair failed and we were unable to recover it. 
00:37:20.039 [2024-07-21 08:33:29.487885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.039 [2024-07-21 08:33:29.487917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.039 qpair failed and we were unable to recover it. 00:37:20.039 [2024-07-21 08:33:29.488152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.039 [2024-07-21 08:33:29.488205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.039 qpair failed and we were unable to recover it. 00:37:20.039 [2024-07-21 08:33:29.488377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.039 [2024-07-21 08:33:29.488402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.039 qpair failed and we were unable to recover it. 00:37:20.039 [2024-07-21 08:33:29.488518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.488543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.488644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.488689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 
00:37:20.040 [2024-07-21 08:33:29.488833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.488859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.488992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.489017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.489115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.489141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.489278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.489303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.489402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.489427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 
00:37:20.040 [2024-07-21 08:33:29.489556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.489581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.489734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.489759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.489860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.489885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.489989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.490014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.490173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.490197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 
00:37:20.040 [2024-07-21 08:33:29.490327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.490353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.490493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.490518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.490673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.490698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.490792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.490817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.490977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.491003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 
00:37:20.040 [2024-07-21 08:33:29.491152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.491181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.491390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.491419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.491597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.491633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.491798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.491840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.491978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.492006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 
00:37:20.040 [2024-07-21 08:33:29.492130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.492157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.492315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.492339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.492445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.492475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.492574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.492599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.492740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.492765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 
00:37:20.040 [2024-07-21 08:33:29.492866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.492891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.493016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.493043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.493213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.493238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.493384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.493411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.493526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.493554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 
00:37:20.040 [2024-07-21 08:33:29.493747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.493773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.493897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.493922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.494040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.494068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.494210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.494235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.494361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.494386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 
00:37:20.040 [2024-07-21 08:33:29.494540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.494569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.494738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.494763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.494896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.494940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.495127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.495155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.495312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.495340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 
00:37:20.040 [2024-07-21 08:33:29.495466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.495493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.495634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.495667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.495811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.495840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.495981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.496011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 00:37:20.040 [2024-07-21 08:33:29.496160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.496189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.040 qpair failed and we were unable to recover it. 
00:37:20.040 [2024-07-21 08:33:29.496365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.040 [2024-07-21 08:33:29.496424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.496585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.496621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.496752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.496801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.496990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.497049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.497223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.497290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 
00:37:20.041 [2024-07-21 08:33:29.497415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.497442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.497573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.497600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.497798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.497841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.497994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.498024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.498140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.498171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 
00:37:20.041 [2024-07-21 08:33:29.498341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.498370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.498497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.498523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.498655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.498681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.498790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.498815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.498964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.498993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 
00:37:20.041 [2024-07-21 08:33:29.499128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.499157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.499323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.499351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.499488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.499516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.499671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.499697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.499810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.499836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 
00:37:20.041 [2024-07-21 08:33:29.499965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.499991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.500127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.500155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.500260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.500288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.500465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.500493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.500692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.500732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 
00:37:20.041 [2024-07-21 08:33:29.500839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.500867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.501036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.501065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.501213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.501260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.501434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.501478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 00:37:20.041 [2024-07-21 08:33:29.501611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.041 [2024-07-21 08:33:29.501644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.041 qpair failed and we were unable to recover it. 
00:37:20.041 [2024-07-21 08:33:29.501761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.501805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.501934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.501977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.502132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.502162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.502301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.502330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.502474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.502499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.502637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.502663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.502794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.502820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.502970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.502998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.503102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.503131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.503267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.503295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.503447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.503474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.503570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.503598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.503788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.503833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.504006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.504035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.504195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.504239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.504355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.504384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.504526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.504552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.504672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.041 [2024-07-21 08:33:29.504699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.041 qpair failed and we were unable to recover it.
00:37:20.041 [2024-07-21 08:33:29.504852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.504903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.505094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.505150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.505385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.505438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.505565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.505593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.505709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.505736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.505870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.505919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.506112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.506164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.506283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.506313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.506465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.506495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.506661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.506688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.506826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.506875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.507004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.507033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.507205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.507234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.507386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.507430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.507575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.507604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.507760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.507786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.507932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.507960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.508091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.508119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.508283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.508313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.508442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.508473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.508658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.508684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.508806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.508831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.508997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.509025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.509130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.509158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.509298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.509327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.509474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.509500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.509657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.509682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.509806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.509832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.509974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.510003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.510127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.510155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.510315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.510344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.510447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.510476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.510587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.510632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.510790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.510819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.510915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.510944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.511053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.511081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.511248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.511305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.511429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.511460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.511619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.511645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.511771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.511797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.511912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.511940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.042 [2024-07-21 08:33:29.512085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.042 [2024-07-21 08:33:29.512115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.042 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.512265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.512295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.512439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.512468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.512601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.512634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.512779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.512823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.513005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.513049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.513202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.513231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.513391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.513422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.513574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.513601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.513749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.513780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.513931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.513959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.514122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.514151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.514293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.514322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.514447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.514475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.514625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.514664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.514771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.514797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.514952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.514978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.515096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.515124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.515276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.515304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.515411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.515442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.515585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.515630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.515770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.515798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.515984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.516028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.516182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.516225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.516406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.516449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.516578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.516605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.516736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.516774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.516891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.516935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.517104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.517133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.517240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.517282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.517450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.517478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.517584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.517611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.517750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.517775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.517956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.517984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.518129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.518173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.518305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.518334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.518470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.518504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.518672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.518711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.518855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.518882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.519031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.519074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.519192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.519237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.519415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.519468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.519624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.519650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.519798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.519841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.520016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.520063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.520218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.520261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.520393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.520420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.520549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.520574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.043 qpair failed and we were unable to recover it.
00:37:20.043 [2024-07-21 08:33:29.520704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.043 [2024-07-21 08:33:29.520733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-21 08:33:29.520871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.044 [2024-07-21 08:33:29.520899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-21 08:33:29.521025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.044 [2024-07-21 08:33:29.521054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-21 08:33:29.521224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.044 [2024-07-21 08:33:29.521253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-21 08:33:29.521482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.044 [2024-07-21 08:33:29.521510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-21 08:33:29.521668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.044 [2024-07-21 08:33:29.521695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-21 08:33:29.521843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.044 [2024-07-21 08:33:29.521873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-21 08:33:29.522040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.044 [2024-07-21 08:33:29.522068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-21 08:33:29.522181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.522209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.522342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.522370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.522545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.522573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.522723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.522761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.522925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.522971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 
00:37:20.044 [2024-07-21 08:33:29.523156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.523184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.523315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.523343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.523474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.523501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.523667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.523694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.523820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.523845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 
00:37:20.044 [2024-07-21 08:33:29.523989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.524017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.524153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.524181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.524292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.524320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.524475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.524500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.524627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.524653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 
00:37:20.044 [2024-07-21 08:33:29.524779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.524804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.524936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.524980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.525129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.525172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.525318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.525348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.525493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.525523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 
00:37:20.044 [2024-07-21 08:33:29.525700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.525727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.525831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.525875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.525995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.526021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.526178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.526208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.526337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.526365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 
00:37:20.044 [2024-07-21 08:33:29.526477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.526505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.526607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.526654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.526800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.526828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.526964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.526991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.527133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.527161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 
00:37:20.044 [2024-07-21 08:33:29.527333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.527389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.527531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.527559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.527697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.527725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.527852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.527878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.528091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.528147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 
00:37:20.044 [2024-07-21 08:33:29.528333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.528364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.528513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.528539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.528656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.528685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.528833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.528858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 00:37:20.044 [2024-07-21 08:33:29.529065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.529128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.044 qpair failed and we were unable to recover it. 
00:37:20.044 [2024-07-21 08:33:29.529394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.044 [2024-07-21 08:33:29.529446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.529594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.529628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.529767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.529795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.529927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.529956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.530149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.530214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 
00:37:20.045 [2024-07-21 08:33:29.530382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.530407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.530531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.530556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.530669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.530699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.530839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.530871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.531008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.531037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 
00:37:20.045 [2024-07-21 08:33:29.531185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.531212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.531367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.531394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.531516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.531541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.531668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.531694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.531821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.531847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 
00:37:20.045 [2024-07-21 08:33:29.531975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.532001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.532133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.532159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.532257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.532283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.532407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.532433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.532558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.532584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 
00:37:20.045 [2024-07-21 08:33:29.532720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.532747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.532897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.532935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.533098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.533126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.533281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.533307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.533429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.533455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 
00:37:20.045 [2024-07-21 08:33:29.533586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.533612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.533743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.533773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.533950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.533976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.534119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.534148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.534264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.534292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 
00:37:20.045 [2024-07-21 08:33:29.534442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.534481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.534595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.534643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.534784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.534810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.534911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.534937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-21 08:33:29.535040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.045 [2024-07-21 08:33:29.535070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.045 qpair failed and we were unable to recover it. 
00:37:20.045 [2024-07-21 08:33:29.535171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.535196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.535344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.535369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.535471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.535502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.535634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.535678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.535856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.535884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.536060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.536089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.536280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.536345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.536474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.536502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.536624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.536650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.536773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.536798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.536921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.536946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.537051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.537075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.537167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.537191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.537297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.537326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.537465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.537493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-21 08:33:29.537628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.045 [2024-07-21 08:33:29.537656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.537782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.537808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.537942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.537968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.538098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.538124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.538245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.538272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.538453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.538482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.538625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.538668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.538788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.538814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.538967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.538992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.539112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.539136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.539267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.539295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.539427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.539454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.539586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.539618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.539718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.539745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.539868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.539894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.539987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.540013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.540138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.540164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.540290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.540316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.540464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.540488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.540622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.540648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.540779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.540804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.540932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.540957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.541112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.541138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.541232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.541256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.541362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.541401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.541566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.541592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.541706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.541733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.541895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.541921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.542069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.542094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.542197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.542223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.542326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.542354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.542486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.542511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.542681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.542712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.542849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.542878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.543089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.543117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.543320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.543370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.543508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.046 [2024-07-21 08:33:29.543536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.046 qpair failed and we were unable to recover it.
00:37:20.046 [2024-07-21 08:33:29.543680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.543706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.543811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.543842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.543997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.544022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.544157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.544183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.544315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.544339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.544465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.544489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.544599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.544644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.544810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.544837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.544935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.544961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.545091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.545119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.545276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.545302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.545418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.545446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.545595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.545631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.545763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.545790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.545947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.545972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.546072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.546098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.546195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.546219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.546348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.546373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.546503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.546531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.546660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.546686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.546848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.546877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.546975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.547003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.547161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.547188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.547375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.547428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.547566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.547595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.547748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.547774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.547904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.547929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.548081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.548107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.548235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.548265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.548365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.548391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.548491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.548516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.548618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.548645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.548768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.548794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.548890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.548917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.549075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.549101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.549231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.047 [2024-07-21 08:33:29.549258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.047 qpair failed and we were unable to recover it.
00:37:20.047 [2024-07-21 08:33:29.549387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.549413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.549518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.549545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.549652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.549678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.549798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.549823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.549922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.549948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.550082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.550108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.550242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.550268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.550392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.550418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.550592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.550628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.550804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.550830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.550938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.550964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.551098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.551126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.551259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.551285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.551424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.551450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.551543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.551569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.551726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.551752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.551852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.551879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.552005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.552030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.552158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.552184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.552333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.552372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.552512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.552539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.552722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.552765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.552877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.552907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.553022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.553050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.553158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.553187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.553303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.553331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.553503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.048 [2024-07-21 08:33:29.553532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.048 qpair failed and we were unable to recover it.
00:37:20.048 [2024-07-21 08:33:29.553691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.553720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 00:37:20.048 [2024-07-21 08:33:29.553858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.553888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 00:37:20.048 [2024-07-21 08:33:29.554001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.554031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 00:37:20.048 [2024-07-21 08:33:29.554199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.554247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 00:37:20.048 [2024-07-21 08:33:29.554350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.554377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 
00:37:20.048 [2024-07-21 08:33:29.554507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.554539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 00:37:20.048 [2024-07-21 08:33:29.554709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.554754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 00:37:20.048 [2024-07-21 08:33:29.554898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.554941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 00:37:20.048 [2024-07-21 08:33:29.555081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.555107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 00:37:20.048 [2024-07-21 08:33:29.555196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.555222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 
00:37:20.048 [2024-07-21 08:33:29.555356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.555396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 00:37:20.048 [2024-07-21 08:33:29.555529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.555556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 00:37:20.048 [2024-07-21 08:33:29.555675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.555704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.048 qpair failed and we were unable to recover it. 00:37:20.048 [2024-07-21 08:33:29.555850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.048 [2024-07-21 08:33:29.555878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.556034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.556088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 
00:37:20.049 [2024-07-21 08:33:29.556217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.556244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.556367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.556395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.556547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.556573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.556790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.556835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.556961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.556992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 
00:37:20.049 [2024-07-21 08:33:29.557146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.557189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.557292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.557319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.557471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.557498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.557632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.557677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.557780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.557808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 
00:37:20.049 [2024-07-21 08:33:29.557980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.558008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.558154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.558182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.558324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.558352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.558495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.558523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.558653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.558679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 
00:37:20.049 [2024-07-21 08:33:29.558826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.558870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.558989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.559019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.559214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.559265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.559395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.559421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.559549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.559575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 
00:37:20.049 [2024-07-21 08:33:29.559707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.559752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.559892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.559936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.560102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.560129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.560255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.560281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.560405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.560431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 
00:37:20.049 [2024-07-21 08:33:29.560563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.560590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.560711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.560741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.560841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.560868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.561014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.561043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.561246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.561275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 
00:37:20.049 [2024-07-21 08:33:29.561414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.561439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.561573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.561599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.561773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.561816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.561963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.561993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.562136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.562165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 
00:37:20.049 [2024-07-21 08:33:29.562336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.562361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.562513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.562538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.562639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.562682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.562859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.562886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 00:37:20.049 [2024-07-21 08:33:29.562998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.049 [2024-07-21 08:33:29.563023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.049 qpair failed and we were unable to recover it. 
00:37:20.050 [2024-07-21 08:33:29.563131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.563156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.563315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.563343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.563473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.563500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.563602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.563634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.563775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.563808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 
00:37:20.050 [2024-07-21 08:33:29.564011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.564054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.564207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.564234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.564331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.564358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.564488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.564513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.564686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.564714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 
00:37:20.050 [2024-07-21 08:33:29.564874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.564903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.565135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.565184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.565309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.565336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.565461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.565486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.565633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.565660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 
00:37:20.050 [2024-07-21 08:33:29.565813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.565839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.565941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.565966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.566058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.566083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.566222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.566247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.566398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.566423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 
00:37:20.050 [2024-07-21 08:33:29.566554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.566579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.566716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.566744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.566879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.566908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.567040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.567067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 00:37:20.050 [2024-07-21 08:33:29.567217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.050 [2024-07-21 08:33:29.567241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.050 qpair failed and we were unable to recover it. 
00:37:20.053 [2024-07-21 08:33:29.585166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.585191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.585285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.585309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.585461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.585486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.585610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.585649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.585749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.585773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 
00:37:20.053 [2024-07-21 08:33:29.585931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.585959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.586090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.586116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.586261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.586286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.586464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.586493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.586666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.586692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 
00:37:20.053 [2024-07-21 08:33:29.586817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.586842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.586937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.586962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.587095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.587122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.587276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.587301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.587432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.587460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 
00:37:20.053 [2024-07-21 08:33:29.587625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.587669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.587794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.587820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.587926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.587955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.588087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.588112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.053 [2024-07-21 08:33:29.588236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.588262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 
00:37:20.053 [2024-07-21 08:33:29.588359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.053 [2024-07-21 08:33:29.588387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.053 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.588514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.588540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.588638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.588664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.588802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.588827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.588923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.588950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 
00:37:20.054 [2024-07-21 08:33:29.589106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.589132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.589225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.589252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.589386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.589410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.589603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.589674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.589811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.589838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 
00:37:20.054 [2024-07-21 08:33:29.589939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.589965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.590095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.590120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.590252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.590280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.590390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.590415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.590541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.590567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 
00:37:20.054 [2024-07-21 08:33:29.590677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.590702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.590854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.590880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.590978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.591003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.591125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.591150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.591276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.591301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 
00:37:20.054 [2024-07-21 08:33:29.591428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.591456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.591594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.591628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.591746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.591773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.591897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.591922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.592061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.592089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 
00:37:20.054 [2024-07-21 08:33:29.592250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.592276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.592403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.592429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.592584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.592610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.592746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.592773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.592873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.592899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 
00:37:20.054 [2024-07-21 08:33:29.592991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.593016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.593114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.593139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.593245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.593284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.593414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.593442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.593569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.593599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 
00:37:20.054 [2024-07-21 08:33:29.593796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.593824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.594025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.594054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.054 [2024-07-21 08:33:29.594257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.054 [2024-07-21 08:33:29.594292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.054 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.594420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.594449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.594619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.594648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 
00:37:20.055 [2024-07-21 08:33:29.594771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.594797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.594918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.594944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.595097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.595122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.595279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.595305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.595401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.595427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 
00:37:20.055 [2024-07-21 08:33:29.595516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.595542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.595669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.595695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.595796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.595821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.595924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.595949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.596052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.596079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 
00:37:20.055 [2024-07-21 08:33:29.596248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.596277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.596416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.596458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.596581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.596618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.596808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.596837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.596967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.596995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 
00:37:20.055 [2024-07-21 08:33:29.597173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.597198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.597318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.597347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.597486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.597515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.597689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.597716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.597810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.597835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 
00:37:20.055 [2024-07-21 08:33:29.597962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.597987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.598113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.598138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.598264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.598290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.598421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.598447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.598570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.598595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 
00:37:20.055 [2024-07-21 08:33:29.598714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.598754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.598907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.598946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.599057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.599083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.599211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.599235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.055 qpair failed and we were unable to recover it. 00:37:20.055 [2024-07-21 08:33:29.599337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.055 [2024-07-21 08:33:29.599362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 
00:37:20.056 [2024-07-21 08:33:29.599489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.599514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.599612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.599647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.599770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.599795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.599893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.599918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.600074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.600098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 
00:37:20.056 [2024-07-21 08:33:29.600233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.600260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.600462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.600490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.600669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.600697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.600828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.600853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.600981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.601007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 
00:37:20.056 [2024-07-21 08:33:29.601161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.601186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.601316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.601342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.601479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.601506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.601638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.601665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.056 [2024-07-21 08:33:29.601775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.601801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 
00:37:20.056 [2024-07-21 08:33:29.601904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.056 [2024-07-21 08:33:29.601929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.056 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.602092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.602118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.602218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.602243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.602372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.602399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.602573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.602626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 
00:37:20.354 [2024-07-21 08:33:29.602768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.602796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.603004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.603037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.603167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.603195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.603360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.603388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.603593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.603631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 
00:37:20.354 [2024-07-21 08:33:29.603775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.603799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.603917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.603942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.604095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.604120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.604211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.604237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 00:37:20.354 [2024-07-21 08:33:29.604364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.604388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.354 qpair failed and we were unable to recover it. 
00:37:20.354 [2024-07-21 08:33:29.604486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.354 [2024-07-21 08:33:29.604511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.604651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.604676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.604778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.604803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.604892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.604916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.605018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.605045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 
00:37:20.355 [2024-07-21 08:33:29.605185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.605210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.605317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.605341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.605465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.605490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.605641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.605666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.605767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.605793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 
00:37:20.355 [2024-07-21 08:33:29.605912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.605939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.606081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.606106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.606241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.606270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.606451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.606478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.606655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.606681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 
00:37:20.355 [2024-07-21 08:33:29.606811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.606836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.606936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.606960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.607050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.607075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.607201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.607230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.607362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.607388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 
00:37:20.355 [2024-07-21 08:33:29.607529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.607568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.607718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.607747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.607863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.607900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.608009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.608035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.608192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.608219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 
00:37:20.355 [2024-07-21 08:33:29.608340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.608365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.608472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.608498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.608630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.608657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.608798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.608824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.608930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.608956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 
00:37:20.355 [2024-07-21 08:33:29.609083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.609109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.609235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.609262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.609398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.609425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.609577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.609605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.609732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.609761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 
00:37:20.355 [2024-07-21 08:33:29.609863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.609889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.609997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.610024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.610178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.610204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.610332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.610359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.355 [2024-07-21 08:33:29.610490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.610515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 
00:37:20.355 [2024-07-21 08:33:29.610659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.355 [2024-07-21 08:33:29.610698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.355 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.610803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.610829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.610957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.610982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.611089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.611115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.611219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.611246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 
00:37:20.356 [2024-07-21 08:33:29.611375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.611405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.611528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.611557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.611715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.611743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.611843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.611869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.612024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.612049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 
00:37:20.356 [2024-07-21 08:33:29.612170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.612196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.612326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.612354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.612556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.612584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.612725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.612756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.612918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.612946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 
00:37:20.356 [2024-07-21 08:33:29.613129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.613156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.613383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.613436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.613602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.613637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.613782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.613806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 00:37:20.356 [2024-07-21 08:33:29.613932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.356 [2024-07-21 08:33:29.613963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.356 qpair failed and we were unable to recover it. 
00:37:20.356 [2024-07-21 08:33:29.614101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.614169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.614326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.614355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.614475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.614500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.614672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.614699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.614873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.614898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.615060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.615087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.615238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.615264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.615383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.615411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.615555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.615581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.615736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.615766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.615997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.616025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.616268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.616320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.616473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.616508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.616622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.616651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.616800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.616829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.616960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.616990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.617188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.617217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.617359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.617385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.617515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.617542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.617678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.617704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.617832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.617857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.356 [2024-07-21 08:33:29.618002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.356 [2024-07-21 08:33:29.618030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.356 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.618171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.618199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.618337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.618365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.618484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.618511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.618621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.618652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.618799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.618825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.618951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.618979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.619082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.619110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.619234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.619262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.619450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.619494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.619624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.619651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.619785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.619811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.619932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.619974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.620123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.620165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.620306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.620350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.620455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.620481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.620659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.620688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.620811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.620838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.620980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.621006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.621151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.621195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.621322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.621348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.621475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.621501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.621633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.621660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.621812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.621869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.622011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.622041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.622214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.622241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.622412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.622437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.622530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.622555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.622733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.622763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.622899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.622928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.623085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.623113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.623288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.623322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.623440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.623466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.623605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.623651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.623771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.623801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.623916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.623946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.624084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.624113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.624258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.624288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.624463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.624491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.624647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.624674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.624830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.624860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.357 qpair failed and we were unable to recover it.
00:37:20.357 [2024-07-21 08:33:29.625017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.357 [2024-07-21 08:33:29.625061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.625174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.625203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.625346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.625371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.625492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.625518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.625659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.625703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.625852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.625881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.626034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.626084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.626254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.626304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.626471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.626499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.626632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.626674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.626803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.626831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.626952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.626982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.627147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.627190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.627310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.627335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.627438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.627464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.627588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.627623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.627781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.627807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.627937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.627963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.628091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.628118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.628245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.628272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.628428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.628453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.628550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.628576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.628727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.628756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.628865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.628894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.629040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.629069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.629186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.629211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.629333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.629361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.629475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.629500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.629659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.629685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.629797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.629826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.629992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.630020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.630163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.630191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.630335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.358 [2024-07-21 08:33:29.630363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.358 qpair failed and we were unable to recover it.
00:37:20.358 [2024-07-21 08:33:29.630481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.358 [2024-07-21 08:33:29.630510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.358 qpair failed and we were unable to recover it. 00:37:20.358 [2024-07-21 08:33:29.630683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.630711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.630815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.630842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.630978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.631021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.631157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.631201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 
00:37:20.359 [2024-07-21 08:33:29.631353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.631380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.631534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.631560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.631713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.631745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.631889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.631917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.632019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.632047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 
00:37:20.359 [2024-07-21 08:33:29.632207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.632235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.632373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.632402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.632542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.632571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.632722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.632749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.632852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.632879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 
00:37:20.359 [2024-07-21 08:33:29.633063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.633091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.633298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.633326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.633470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.633499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.633659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.633686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.633818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.633843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 
00:37:20.359 [2024-07-21 08:33:29.633973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.634017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.634154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.634183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.634322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.634365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.634511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.634539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.634688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.634714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 
00:37:20.359 [2024-07-21 08:33:29.634838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.634864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.634987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.635028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.635164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.635192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.635369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.635398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.635565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.635594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 
00:37:20.359 [2024-07-21 08:33:29.635797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.635836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.635975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.636001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.636118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.636147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.636261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.636291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.636464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.636493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 
00:37:20.359 [2024-07-21 08:33:29.636632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.636677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.636805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.636831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.636956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.636982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.637130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.637164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.637317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.637347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 
00:37:20.359 [2024-07-21 08:33:29.637532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.359 [2024-07-21 08:33:29.637572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.359 qpair failed and we were unable to recover it. 00:37:20.359 [2024-07-21 08:33:29.637714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.637742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.637882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.637926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.638096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.638139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.638285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.638329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 
00:37:20.360 [2024-07-21 08:33:29.638484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.638509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.638607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.638655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.638838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.638868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.639017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.639042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.639197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.639225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 
00:37:20.360 [2024-07-21 08:33:29.639331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.639359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.639506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.639550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.639708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.639740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.639904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.639947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.640124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.640168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 
00:37:20.360 [2024-07-21 08:33:29.640319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.640362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.640494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.640520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.640650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.640682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.640814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.640857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.641017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.641047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 
00:37:20.360 [2024-07-21 08:33:29.641199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.641225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.641328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.641354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.641488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.641513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.641637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.641664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.641816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.641841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 
00:37:20.360 [2024-07-21 08:33:29.642003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.642060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.642271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.642319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.642438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.642471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.642600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.642634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.642734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.642760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 
00:37:20.360 [2024-07-21 08:33:29.642888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.642914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.643088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.643114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.643273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.643302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.643435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.643464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.643593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.643625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 
00:37:20.360 [2024-07-21 08:33:29.643762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.643788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.643916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.643961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.644126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.644155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.644287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.644320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.644440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.644467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 
00:37:20.360 [2024-07-21 08:33:29.644625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.644651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.360 [2024-07-21 08:33:29.644739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.360 [2024-07-21 08:33:29.644764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.360 qpair failed and we were unable to recover it. 00:37:20.361 [2024-07-21 08:33:29.644884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.361 [2024-07-21 08:33:29.644909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.361 qpair failed and we were unable to recover it. 00:37:20.361 [2024-07-21 08:33:29.645066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.361 [2024-07-21 08:33:29.645093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.361 qpair failed and we were unable to recover it. 00:37:20.361 [2024-07-21 08:33:29.645223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.361 [2024-07-21 08:33:29.645266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.361 qpair failed and we were unable to recover it. 
00:37:20.361 [2024-07-21 08:33:29.645399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.645428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.645569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.645597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.645751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.645777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.645930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.645973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.646124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.646150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.646329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.646357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.646466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.646494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.646647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.646674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.646799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.646824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.646995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.647024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.647193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.647222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.647343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.647384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.647492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.647521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.647678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.647704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.647825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.647854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.647996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.648025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.648155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.648183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.648319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.648349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.648498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.648527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.648628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.648672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.648813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.648854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.649013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.649061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.649211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.649253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.649355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.649381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.649532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.649572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.649748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.649777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.649876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.649919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.650110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.650139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.650260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.650285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.650436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.650464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.650645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.650671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.650801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.650826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.650967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.650992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.651173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.651201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.651345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.651372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.651532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.651558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.651685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.651711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.361 qpair failed and we were unable to recover it.
00:37:20.361 [2024-07-21 08:33:29.651837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.361 [2024-07-21 08:33:29.651862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.652016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.652045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.652186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.652214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.652343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.652385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.652551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.652579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.652708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.652733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.652863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.652888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.653033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.653061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.653200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.653227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.653366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.653394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.653548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.653591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.653773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.653812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.653997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.654042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.654283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.654327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.654525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.654577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.654709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.654736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.654854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.654884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.655078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.655106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.655279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.655322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.655456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.655482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.655607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.655640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.655793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.655841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.655995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.656021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.656149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.656175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.656307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.656334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.656467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.656496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.656626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.656652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.656749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.656773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.656923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.656948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.657074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.657098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.657195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.657220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.657376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.657404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.657542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.657569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.657705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.657731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.657908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.657936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.658073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.658100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.658243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.658271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.658464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.658511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.658635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.658661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.658776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.658821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.658985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.659011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.362 [2024-07-21 08:33:29.659139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.362 [2024-07-21 08:33:29.659165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.362 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.659267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.659294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.659425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.659452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.659605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.659637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.659762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.659787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.659951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.659979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.660113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.660141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.660280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.660307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.660457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.660482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.660636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.660662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.660819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.660847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.660988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.661017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.661156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.661184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.661291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.661318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.661471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.661498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.661624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.661651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.661803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.661847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.661991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.662035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.662212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.662257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.662413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.662439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.662593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.662623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.662779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.662825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.662940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.662969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.663162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.663205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.663319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.663362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.663493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.663519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.663694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.663724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.663863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.663891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.664039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.664085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.664282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.664310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.664425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.664450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.664579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.664604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.664774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.664802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.664952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.664980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.665120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.665148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.363 qpair failed and we were unable to recover it.
00:37:20.363 [2024-07-21 08:33:29.665309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.363 [2024-07-21 08:33:29.665337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.364 qpair failed and we were unable to recover it.
00:37:20.364 [2024-07-21 08:33:29.665474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.364 [2024-07-21 08:33:29.665503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.364 qpair failed and we were unable to recover it.
00:37:20.364 [2024-07-21 08:33:29.665650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.665677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.665818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.665845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.665950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.665978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.666087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.666114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.666286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.666333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 
00:37:20.364 [2024-07-21 08:33:29.666430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.666455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.666611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.666644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.666740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.666766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.666908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.666953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.667079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.667122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 
00:37:20.364 [2024-07-21 08:33:29.667309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.667339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.667452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.667481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.667670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.667695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.667819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.667865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.668023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.668050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 
00:37:20.364 [2024-07-21 08:33:29.668174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.668203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.668359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.668385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.668538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.668564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.668690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.668733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.668883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.668928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 
00:37:20.364 [2024-07-21 08:33:29.669103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.669151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.669279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.669305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.669434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.669462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.669564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.669588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.669723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.669749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 
00:37:20.364 [2024-07-21 08:33:29.669902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.669928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.670050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.670079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.670229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.670258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.670392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.670421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.670538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.670565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 
00:37:20.364 [2024-07-21 08:33:29.670700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.670727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.670866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.670894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.671036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.671063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.671205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.671233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.671403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.671448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 
00:37:20.364 [2024-07-21 08:33:29.671603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.671634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.671788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.671817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.671990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.672033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.672154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.672197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.364 qpair failed and we were unable to recover it. 00:37:20.364 [2024-07-21 08:33:29.672344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.364 [2024-07-21 08:33:29.672387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 
00:37:20.365 [2024-07-21 08:33:29.672493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.672520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.672650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.672676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.672799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.672826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.672962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.672989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.673150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.673178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 
00:37:20.365 [2024-07-21 08:33:29.673311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.673339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.673487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.673511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.673605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.673640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.673747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.673773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.673914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.673959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 
00:37:20.365 [2024-07-21 08:33:29.674107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.674150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.674328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.674375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.674501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.674527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.674654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.674684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.674839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.674883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 
00:37:20.365 [2024-07-21 08:33:29.675025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.675072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.675226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.675254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.675372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.675398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.675499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.675524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.675672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.675717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 
00:37:20.365 [2024-07-21 08:33:29.675880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.675905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.676032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.676058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.676163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.676189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.676292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.676318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.676449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.676474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 
00:37:20.365 [2024-07-21 08:33:29.676631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.676657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.676760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.676788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.676956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.676981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.677101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.677127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.677254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.677279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 
00:37:20.365 [2024-07-21 08:33:29.677376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.677401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.677554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.677580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.677735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.677778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.677926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.677968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 00:37:20.365 [2024-07-21 08:33:29.678112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.365 [2024-07-21 08:33:29.678142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.365 qpair failed and we were unable to recover it. 
00:37:20.365 [2024-07-21 08:33:29.678263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.365 [2024-07-21 08:33:29.678289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.365 qpair failed and we were unable to recover it.
00:37:20.365 [2024-07-21 08:33:29.678409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.365 [2024-07-21 08:33:29.678434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.365 qpair failed and we were unable to recover it.
00:37:20.365 [2024-07-21 08:33:29.678564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.365 [2024-07-21 08:33:29.678590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.365 qpair failed and we were unable to recover it.
00:37:20.365 [2024-07-21 08:33:29.678765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.365 [2024-07-21 08:33:29.678808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.365 qpair failed and we were unable to recover it.
00:37:20.365 [2024-07-21 08:33:29.678947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.365 [2024-07-21 08:33:29.678978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.365 qpair failed and we were unable to recover it.
00:37:20.365 [2024-07-21 08:33:29.679130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.679173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.679323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.679350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.679500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.679526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.679676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.679705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.679882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.679911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.680087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.680116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.680251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.680277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.680387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.680413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.680568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.680593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.680740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.680768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.680907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.680936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.681082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.681111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.681221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.681247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.681403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.681434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.681566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.681591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.681711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.681739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.681939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.681968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.682170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.682199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.682339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.682364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.682493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.682518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.682680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.682709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.682875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.682904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.683099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.683128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.683274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.683299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.683400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.683426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.683554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.683579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.683711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.683737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.683869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.683895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.684011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.684040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.684183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.684210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.684366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.684391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.684491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.684516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.684609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.684640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.684797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.684823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.684928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.684953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.685103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.685129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.685228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.685253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.685408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.685433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.685528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.685554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.685678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.685708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.685867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.366 [2024-07-21 08:33:29.685899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.366 qpair failed and we were unable to recover it.
00:37:20.366 [2024-07-21 08:33:29.686047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.686089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.686220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.686245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.686397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.686423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.686547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.686573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.686757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.686786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.686979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.687046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.687263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.687292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.687404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.687429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.687531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.687556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.687682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.687708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.687837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.687863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.688013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.688039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.688164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.688192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.688293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.688318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.688474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.688500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.688601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.688633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.688725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.688751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.688855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.688880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.689041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.689066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.689215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.689241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.689372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.689398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.689500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.689526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.689630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.689675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.689827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.689853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.690008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.690033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.690131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.690157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.690253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.690278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.690376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.690403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.690532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.690558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.690691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.690718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.690839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.690865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.691012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.691042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.691190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.691217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.691321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.691348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.367 [2024-07-21 08:33:29.691494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.367 [2024-07-21 08:33:29.691534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.367 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.691672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.691705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.691841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.691868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.691992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.692018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.692179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.692204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.692328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.692360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.692515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.692540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.692694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.692739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.692847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.692874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.693000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.693026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.693154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.693181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.693302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.693328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.693470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.693509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.693651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.693679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.693832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.693859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.693988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.694015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.694146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.694173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.694330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.694356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.694509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.694534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.368 [2024-07-21 08:33:29.694694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.368 [2024-07-21 08:33:29.694720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.368 qpair failed and we were unable to recover it.
00:37:20.371 [2024-07-21 08:33:29.712995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.713029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.713194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.713222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.713362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.713391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.713519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.713547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.713678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.713705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 
00:37:20.371 [2024-07-21 08:33:29.713886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.713929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.714068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.714096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.714216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.714258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.714394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.714420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.714569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.714594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 
00:37:20.371 [2024-07-21 08:33:29.714749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.714791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.714970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.715015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.715163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.715206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.715338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.715363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.715484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.715511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 
00:37:20.371 [2024-07-21 08:33:29.715681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.715711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.715867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.715911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.716026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.716055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.716223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.716249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.716420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.716445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 
00:37:20.371 [2024-07-21 08:33:29.716571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.716597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.716716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.716745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.716894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.716938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.717090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.717120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.717288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.717317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 
00:37:20.371 [2024-07-21 08:33:29.717463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.717488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.717624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.717651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.717781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.717807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.717957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.717985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.718126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.718154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 
00:37:20.371 [2024-07-21 08:33:29.718297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.718326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.371 [2024-07-21 08:33:29.718527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.371 [2024-07-21 08:33:29.718553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.371 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.718643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.718669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.718793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.718820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.718959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.718989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 
00:37:20.372 [2024-07-21 08:33:29.719132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.719160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.719307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.719336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.719487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.719514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.719679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.719706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.719803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.719828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 
00:37:20.372 [2024-07-21 08:33:29.719943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.719979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.720123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.720152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.720313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.720356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.720484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.720510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.720685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.720728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 
00:37:20.372 [2024-07-21 08:33:29.720828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.720853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.720953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.720979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.721108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.721134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.721257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.721283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.721409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.721435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 
00:37:20.372 [2024-07-21 08:33:29.721592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.721628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.721748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.721776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.721920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.721947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.722092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.722121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.722231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.722261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 
00:37:20.372 [2024-07-21 08:33:29.722408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.722433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.722560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.722587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.722724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.722751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.722855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.722883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.723019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.723048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 
00:37:20.372 [2024-07-21 08:33:29.723157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.723186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.723354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.723382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.723561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.723590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.723716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.723743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.723891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.723920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 
00:37:20.372 [2024-07-21 08:33:29.724062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.724090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.724229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.724258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.724427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.724473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.724649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.724675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.724823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.724868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 
00:37:20.372 [2024-07-21 08:33:29.725041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.725084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.725229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.725257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.372 qpair failed and we were unable to recover it. 00:37:20.372 [2024-07-21 08:33:29.725413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.372 [2024-07-21 08:33:29.725440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.725568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.725596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.725738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.725765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 
00:37:20.373 [2024-07-21 08:33:29.725941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.725969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.726106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.726134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.726298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.726327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.726444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.726474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.726630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.726657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 
00:37:20.373 [2024-07-21 08:33:29.726782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.726812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.726910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.726937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.727071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.727100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.727269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.727297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.727435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.727465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 
00:37:20.373 [2024-07-21 08:33:29.727624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.727663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.727803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.727830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.727981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.728025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.728170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.728212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 00:37:20.373 [2024-07-21 08:33:29.728367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.373 [2024-07-21 08:33:29.728410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.373 qpair failed and we were unable to recover it. 
00:37:20.373 [2024-07-21 08:33:29.731195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.373 [2024-07-21 08:33:29.731220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.373 qpair failed and we were unable to recover it.
00:37:20.373 [2024-07-21 08:33:29.731313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.373 [2024-07-21 08:33:29.731340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.373 qpair failed and we were unable to recover it.
00:37:20.373 [2024-07-21 08:33:29.731459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.373 [2024-07-21 08:33:29.731493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.373 qpair failed and we were unable to recover it.
00:37:20.373 [2024-07-21 08:33:29.731653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.373 [2024-07-21 08:33:29.731681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.373 qpair failed and we were unable to recover it.
00:37:20.373 [2024-07-21 08:33:29.731807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.373 [2024-07-21 08:33:29.731835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.373 qpair failed and we were unable to recover it.
00:37:20.376 [2024-07-21 08:33:29.746236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.746261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.746382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.746410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.746634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.746678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.746806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.746831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.746923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.746949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 
00:37:20.376 [2024-07-21 08:33:29.747087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.747117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.747215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.747241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.747367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.747393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.747525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.747550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.747672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.747699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 
00:37:20.376 [2024-07-21 08:33:29.747847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.747873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.748006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.748032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.748164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.748190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.748312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.748338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.748487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.748514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 
00:37:20.376 [2024-07-21 08:33:29.748642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.748668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.748767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.748793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.748921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.748947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.749050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.749077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.749222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.749261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 
00:37:20.376 [2024-07-21 08:33:29.749436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.749463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.749592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.749626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.749752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.749777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.749907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.749932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.750057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.750083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 
00:37:20.376 [2024-07-21 08:33:29.750209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.750236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.750348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.750374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.750553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.750582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.750781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.750808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.750916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.750943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 
00:37:20.376 [2024-07-21 08:33:29.751093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.751119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.751273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.751299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.751403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.751429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.751527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.376 [2024-07-21 08:33:29.751553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.376 qpair failed and we were unable to recover it. 00:37:20.376 [2024-07-21 08:33:29.751680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.751707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 
00:37:20.377 [2024-07-21 08:33:29.751843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.751868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.751996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.752023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.752150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.752176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.752300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.752325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.752413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.752439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 
00:37:20.377 [2024-07-21 08:33:29.752541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.752567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.752701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.752728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.752830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.752857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.752986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.753012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.753138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.753163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 
00:37:20.377 [2024-07-21 08:33:29.753292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.753323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.753456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.753484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.753611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.753648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.753752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.753778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.753898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.753923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 
00:37:20.377 [2024-07-21 08:33:29.754016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.754042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.754139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.754166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.754288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.754313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.754416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.754441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.754539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.754565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 
00:37:20.377 [2024-07-21 08:33:29.754716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.754746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.754918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.754945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.755119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.755147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.755348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.755377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.755550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.755579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 
00:37:20.377 [2024-07-21 08:33:29.755787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.755816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.756008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.756060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.756228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.756295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.756514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.756542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.756693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.756719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 
00:37:20.377 [2024-07-21 08:33:29.756846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.756872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.756997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.757023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.757152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.757179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.757305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.757330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.757422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.757449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 
00:37:20.377 [2024-07-21 08:33:29.757634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.757673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.757839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.757866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.757975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.758002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.758133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.758158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 00:37:20.377 [2024-07-21 08:33:29.758286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.377 [2024-07-21 08:33:29.758312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.377 qpair failed and we were unable to recover it. 
00:37:20.377 [2024-07-21 08:33:29.758433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.378 [2024-07-21 08:33:29.758462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.378 qpair failed and we were unable to recover it. 00:37:20.378 [2024-07-21 08:33:29.758602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.378 [2024-07-21 08:33:29.758638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.378 qpair failed and we were unable to recover it. 00:37:20.378 [2024-07-21 08:33:29.758781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.378 [2024-07-21 08:33:29.758807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.378 qpair failed and we were unable to recover it. 00:37:20.378 [2024-07-21 08:33:29.758932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.378 [2024-07-21 08:33:29.758957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.378 qpair failed and we were unable to recover it. 00:37:20.378 [2024-07-21 08:33:29.759112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.378 [2024-07-21 08:33:29.759138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.378 qpair failed and we were unable to recover it. 
00:37:20.378 [2024-07-21 08:33:29.759271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.378 [2024-07-21 08:33:29.759296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.378 qpair failed and we were unable to recover it. 00:37:20.378 [2024-07-21 08:33:29.759398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.378 [2024-07-21 08:33:29.759424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.378 qpair failed and we were unable to recover it. 00:37:20.378 [2024-07-21 08:33:29.759572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.378 [2024-07-21 08:33:29.759598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.378 qpair failed and we were unable to recover it. 00:37:20.378 [2024-07-21 08:33:29.759738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.378 [2024-07-21 08:33:29.759765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.378 qpair failed and we were unable to recover it. 00:37:20.378 [2024-07-21 08:33:29.759887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.378 [2024-07-21 08:33:29.759912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.378 qpair failed and we were unable to recover it. 
00:37:20.378 [2024-07-21 08:33:29.760040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.760070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.760204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.760229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.760333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.760358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.760448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.760473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.760601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.760632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.760762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.760789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.760883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.760909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.761034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.761059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.761178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.761203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.761330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.761357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.761505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.761533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.761656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.761682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.761779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.761805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.761910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.761936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.762065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.762091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.762214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.762239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.762372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.762398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.762520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.762545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.762650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.762676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.762779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.762805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.762933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.762959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.763088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.763115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.763273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.763299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.763438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.763468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.763589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.763620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.763798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.763826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.763959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.763988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.764146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.764189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.764390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.764420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.764596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.764634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.378 [2024-07-21 08:33:29.764784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.378 [2024-07-21 08:33:29.764810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.378 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.764937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.764962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.765063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.765089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.765219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.765247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.765400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.765427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.765549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.765575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.765709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.765736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.765876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.765902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.766014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.766040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.766161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.766186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.766324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.766354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.766498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.766527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.766677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.766703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.766800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.766826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.766925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.766950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.767103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.767129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.767285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.767311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.767463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.767488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.767626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.767681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.767789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.767816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.767943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.767969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.768070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.768095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.768226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.768252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.768372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.768397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.768524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.768552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.768661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.768687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.768808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.768833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.768931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.768956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.769110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.769135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.769261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.769287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.769411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.769436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.769544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.769573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.769747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.769777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.769911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.769941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.770122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.770150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.770306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.770336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.770478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.770510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.770670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.770702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.770839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.770865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.770991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.771017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.771120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.771146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.379 [2024-07-21 08:33:29.771249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.379 [2024-07-21 08:33:29.771276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.379 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.771374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.771400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.771530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.771556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.771684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.771711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.771841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.771867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.772012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.772038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.772192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.772217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.772345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.772371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.772495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.772522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.772673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.772699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.772815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.772843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.772980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.773005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.773157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.773182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.773338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.773364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.773499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.773525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.773682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.773708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.773810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.773836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.773966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.773992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.774123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.774149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.774304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.774330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.774470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.774495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.774610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.774669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.774827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.774853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.774987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.380 [2024-07-21 08:33:29.775013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.380 qpair failed and we were unable to recover it.
00:37:20.380 [2024-07-21 08:33:29.775107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.775133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.380 qpair failed and we were unable to recover it. 00:37:20.380 [2024-07-21 08:33:29.775227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.775252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.380 qpair failed and we were unable to recover it. 00:37:20.380 [2024-07-21 08:33:29.775377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.775404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.380 qpair failed and we were unable to recover it. 00:37:20.380 [2024-07-21 08:33:29.775539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.775564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.380 qpair failed and we were unable to recover it. 00:37:20.380 [2024-07-21 08:33:29.775696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.775722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.380 qpair failed and we were unable to recover it. 
00:37:20.380 [2024-07-21 08:33:29.775816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.775841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.380 qpair failed and we were unable to recover it. 00:37:20.380 [2024-07-21 08:33:29.775965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.775990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.380 qpair failed and we were unable to recover it. 00:37:20.380 [2024-07-21 08:33:29.776114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.776140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.380 qpair failed and we were unable to recover it. 00:37:20.380 [2024-07-21 08:33:29.776232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.776257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.380 qpair failed and we were unable to recover it. 00:37:20.380 [2024-07-21 08:33:29.776374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.776400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.380 qpair failed and we were unable to recover it. 
00:37:20.380 [2024-07-21 08:33:29.776571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.776599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.380 qpair failed and we were unable to recover it. 00:37:20.380 [2024-07-21 08:33:29.776721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.380 [2024-07-21 08:33:29.776747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.776901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.776931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.777027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.777053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.777153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.777179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 
00:37:20.381 [2024-07-21 08:33:29.777300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.777325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.777478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.777504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.777663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.777689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.777779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.777804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.777927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.777953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 
00:37:20.381 [2024-07-21 08:33:29.778114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.778140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.778262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.778287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.778450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.778489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.778670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.778699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.778829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.778855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 
00:37:20.381 [2024-07-21 08:33:29.778985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.779010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.779149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.779175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.779271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.779296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.779393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.779419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.779521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.779547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 
00:37:20.381 [2024-07-21 08:33:29.779694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.779720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.779854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.779882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.779985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.780011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.780181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.780209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.780350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.780379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 
00:37:20.381 [2024-07-21 08:33:29.780542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.780571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.780724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.780751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.780873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.780899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.781007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.781033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.781129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.781155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 
00:37:20.381 [2024-07-21 08:33:29.781262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.781289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.781423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.781450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.781603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.781635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.781761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.781788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.781921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.781946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 
00:37:20.381 [2024-07-21 08:33:29.782073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.782100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.782237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.782264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.782386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.782414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.782584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.782618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.782783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.782812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 
00:37:20.381 [2024-07-21 08:33:29.783093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.783145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.381 [2024-07-21 08:33:29.783310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.381 [2024-07-21 08:33:29.783339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.381 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.783478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.783513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.783662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.783688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.783816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.783842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 
00:37:20.382 [2024-07-21 08:33:29.783974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.784000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.784160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.784185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.784282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.784307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.784463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.784490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.784648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.784674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 
00:37:20.382 [2024-07-21 08:33:29.784799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.784826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.784980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.785007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.785137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.785162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.785285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.785311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.785463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.785489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 
00:37:20.382 [2024-07-21 08:33:29.785623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.785648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.785753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.785781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.785882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.785908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.786034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.786060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.786210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.786235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 
00:37:20.382 [2024-07-21 08:33:29.786398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.786430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.786644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.786671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.786778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.786805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.786937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.786963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.787095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.787120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 
00:37:20.382 [2024-07-21 08:33:29.787245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.787270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.787425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.787452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.787682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.787709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.787839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.787865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.787997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.788023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 
00:37:20.382 [2024-07-21 08:33:29.788188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.788214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.788343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.788370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.788475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.788502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.788635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.788662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.788820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.788846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 
00:37:20.382 [2024-07-21 08:33:29.788955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.788982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.789085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.789112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.789216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.789241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.789393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.789418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.789593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.789629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 
00:37:20.382 [2024-07-21 08:33:29.789770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.789796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.789900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.789927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.382 qpair failed and we were unable to recover it. 00:37:20.382 [2024-07-21 08:33:29.790054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.382 [2024-07-21 08:33:29.790084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.790179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.790204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.790349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.790375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 
00:37:20.383 [2024-07-21 08:33:29.790478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.790504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.790629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.790656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.790784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.790811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.790934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.790959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.791067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.791092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 
00:37:20.383 [2024-07-21 08:33:29.791182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.791208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.791386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.791415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.791552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.791580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.791719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.791748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.791909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.791938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 
00:37:20.383 [2024-07-21 08:33:29.792072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.792102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.792304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.792332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.792475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.792505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.792675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.792701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.792797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.792825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 
00:37:20.383 [2024-07-21 08:33:29.792928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.792954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.793106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.793132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.793231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.793258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.793414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.793444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.793554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.793582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 
00:37:20.383 [2024-07-21 08:33:29.793723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.793753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.794036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.794089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.794247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.794275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.794413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.794441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.794557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.794601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 
00:37:20.383 [2024-07-21 08:33:29.794731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.794756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.794850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.794876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.795037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.795062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.795168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.795193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.795320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.795346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 
00:37:20.383 [2024-07-21 08:33:29.795511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.795540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.795650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.795676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.795805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.795830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.796023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.796052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.796223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.796252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 
00:37:20.383 [2024-07-21 08:33:29.796391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.796421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.796626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.796671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.383 qpair failed and we were unable to recover it. 00:37:20.383 [2024-07-21 08:33:29.796827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.383 [2024-07-21 08:33:29.796860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.797073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.797127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.797345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.797371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 
00:37:20.384 [2024-07-21 08:33:29.797523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.797552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.797703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.797732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.797871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.797899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.798075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.798103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.798273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.798298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 
00:37:20.384 [2024-07-21 08:33:29.798460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.798485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.798587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.798617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.798763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.798792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.799024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.799052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.799194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.799224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 
00:37:20.384 [2024-07-21 08:33:29.799370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.799396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.799509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.799536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.799693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.799720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.799851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.799877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.800008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.800034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 
00:37:20.384 [2024-07-21 08:33:29.800159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.800184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.800308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.800334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.800434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.800460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.800559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.800584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.800719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.800745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 
00:37:20.384 [2024-07-21 08:33:29.800874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.800900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.801028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.801054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.801178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.801204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.801314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.801339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.801448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.801473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 
00:37:20.384 [2024-07-21 08:33:29.801593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.801623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.801781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.801806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.802009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.802035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.802154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.802180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.802298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.802324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 
00:37:20.384 [2024-07-21 08:33:29.802493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.802521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.802760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.384 [2024-07-21 08:33:29.802799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.384 qpair failed and we were unable to recover it. 00:37:20.384 [2024-07-21 08:33:29.802935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.802962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 00:37:20.385 [2024-07-21 08:33:29.803057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.803084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 00:37:20.385 [2024-07-21 08:33:29.803236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.803262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 
00:37:20.385 [2024-07-21 08:33:29.803362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.803389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 00:37:20.385 [2024-07-21 08:33:29.803548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.803575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 00:37:20.385 [2024-07-21 08:33:29.803711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.803742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 00:37:20.385 [2024-07-21 08:33:29.803904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.803931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 00:37:20.385 [2024-07-21 08:33:29.804035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.804061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 
00:37:20.385 [2024-07-21 08:33:29.804211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.804237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 00:37:20.385 [2024-07-21 08:33:29.804362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.804388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 00:37:20.385 [2024-07-21 08:33:29.804524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.804550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 00:37:20.385 [2024-07-21 08:33:29.804682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.804710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 00:37:20.385 [2024-07-21 08:33:29.804842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.804868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 
00:37:20.385 [2024-07-21 08:33:29.804983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.385 [2024-07-21 08:33:29.805009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.385 qpair failed and we were unable to recover it. 
[... the connect()/qpair-failure pair above repeats continuously from 08:33:29.805 through 08:33:29.823 with errno = 111, addr=10.0.0.2, port=4420 in every entry, cycling over tqpairs 0x7fd7e4000b90, 0x7fd7d4000b90, 0x7fd7dc000b90, and 0x64d560; identical repeats elided ...]
00:37:20.388 [2024-07-21 08:33:29.823417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.823447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.823602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.823635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.823792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.823818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.823938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.823982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.824153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.824200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.824348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.824391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.824522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.824548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.824676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.824703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.824797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.824823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.824954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.824979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.825105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.825129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.825261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.825286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.825414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.825439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.825564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.825588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.825750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.825776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.825927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.825955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.826121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.826149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.826259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.826287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.826402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.826432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.826604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.826635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.826783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.826808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.826979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.827007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.827175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.827204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.827316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.827343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.827485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.827513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.827664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.827689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.827785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.827810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.827939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.827964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.828087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.828114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.828246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.828274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.828410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.828438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.828550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.828575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.828686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.828711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.828820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.828845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.388 qpair failed and we were unable to recover it.
00:37:20.388 [2024-07-21 08:33:29.828987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.388 [2024-07-21 08:33:29.829014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.829165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.829192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.829335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.829363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.829493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.829520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.829629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.829675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.829781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.829806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.829942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.829985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.830180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.830224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.830402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.830446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.830541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.830568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.830692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.830737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.830872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.830899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.831070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.831113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.831236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.831263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.831419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.831445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.831574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.831601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.831707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.831732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.831863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.831889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.832047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.832073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.832225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.832251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.832349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.832374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.832529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.832554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.832680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.832706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.832836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.832862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.833069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.833122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.833308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.833367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.833481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.833510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.833659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.833685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.833837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.389 [2024-07-21 08:33:29.833862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.389 qpair failed and we were unable to recover it.
00:37:20.389 [2024-07-21 08:33:29.834019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.834049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.834218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.834246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.834409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.834441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.834583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.834611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.834740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.834766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.834895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.834920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.835065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.835094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.835236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.835281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.835447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.835475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.835618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.835662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.835760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.835785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.835912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.835939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.836126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.836155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.836378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.836406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.836544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.836572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.836714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.836741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.836868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.836897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.837026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.837054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.837194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.837223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.837344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.837369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.837499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.837524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.837656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.837682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.837807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.837832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.837961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.838003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.838141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.838169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.838368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.838397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.838535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.838563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.838722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.838748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.838872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.838898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.839024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.839071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.839215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.839243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.839444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.390 [2024-07-21 08:33:29.839473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.390 qpair failed and we were unable to recover it.
00:37:20.390 [2024-07-21 08:33:29.839602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.390 [2024-07-21 08:33:29.839653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.390 qpair failed and we were unable to recover it. 00:37:20.390 [2024-07-21 08:33:29.839782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.390 [2024-07-21 08:33:29.839808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.390 qpair failed and we were unable to recover it. 00:37:20.390 [2024-07-21 08:33:29.839930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.390 [2024-07-21 08:33:29.839959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.390 qpair failed and we were unable to recover it. 00:37:20.390 [2024-07-21 08:33:29.840094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.390 [2024-07-21 08:33:29.840123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.390 qpair failed and we were unable to recover it. 00:37:20.390 [2024-07-21 08:33:29.840220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.390 [2024-07-21 08:33:29.840248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.390 qpair failed and we were unable to recover it. 
00:37:20.390 [2024-07-21 08:33:29.840424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.390 [2024-07-21 08:33:29.840486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.390 qpair failed and we were unable to recover it. 00:37:20.390 [2024-07-21 08:33:29.840627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.390 [2024-07-21 08:33:29.840655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.390 qpair failed and we were unable to recover it. 00:37:20.390 [2024-07-21 08:33:29.840813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.390 [2024-07-21 08:33:29.840839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.390 qpair failed and we were unable to recover it. 00:37:20.390 [2024-07-21 08:33:29.841031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.390 [2024-07-21 08:33:29.841058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.390 qpair failed and we were unable to recover it. 00:37:20.390 [2024-07-21 08:33:29.841186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.841212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 
00:37:20.391 [2024-07-21 08:33:29.841346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.841372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.841499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.841525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.841677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.841703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.841798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.841824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.841976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.842019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 
00:37:20.391 [2024-07-21 08:33:29.842189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.842217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.842323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.842351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.842508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.842536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.842662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.842687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.842818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.842843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 
00:37:20.391 [2024-07-21 08:33:29.842996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.843024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.843194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.843222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.843362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.843390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.843568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.843594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.843735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.843764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 
00:37:20.391 [2024-07-21 08:33:29.843905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.843934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.844058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.844100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.844265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.844293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.844403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.844431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.844602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.844634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 
00:37:20.391 [2024-07-21 08:33:29.844733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.844760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.844934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.844962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.845071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.845099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.845242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.845270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.845377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.845404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 
00:37:20.391 [2024-07-21 08:33:29.845538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.845567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.845714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.845740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.845840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.845866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.846030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.846092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.846254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.846299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 
00:37:20.391 [2024-07-21 08:33:29.846454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.846483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.846653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.846690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.846812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.846842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.846984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.847013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.847158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.847184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 
00:37:20.391 [2024-07-21 08:33:29.847303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.847329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.847438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.847465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.847569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.847597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.847761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.847788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 00:37:20.391 [2024-07-21 08:33:29.847942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.847968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.391 qpair failed and we were unable to recover it. 
00:37:20.391 [2024-07-21 08:33:29.848173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.391 [2024-07-21 08:33:29.848240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.848347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.848380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.848521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.848550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.848697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.848723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.848848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.848873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 
00:37:20.392 [2024-07-21 08:33:29.849044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.849085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.849219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.849248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.849351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.849380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.849531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.849559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.849727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.849753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 
00:37:20.392 [2024-07-21 08:33:29.849880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.849905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.850033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.850061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.850229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.850257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.850400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.850428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.850567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.850597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 
00:37:20.392 [2024-07-21 08:33:29.850795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.850835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.851021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.851066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.851251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.851295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.851453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.851480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.851607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.851645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 
00:37:20.392 [2024-07-21 08:33:29.851823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.851867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.852040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.852087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.852207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.852250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.852365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.852392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.852545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.852571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 
00:37:20.392 [2024-07-21 08:33:29.852752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.852795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.852915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.852944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.853226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.853276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.853384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.853415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.853547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.853575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 
00:37:20.392 [2024-07-21 08:33:29.853703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.853734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.853852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.853881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.854022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.854051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.854164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.854193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 00:37:20.392 [2024-07-21 08:33:29.854330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.392 [2024-07-21 08:33:29.854358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.392 qpair failed and we were unable to recover it. 
00:37:20.392 [2024-07-21 08:33:29.854494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.392 [2024-07-21 08:33:29.854522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.392 qpair failed and we were unable to recover it.
00:37:20.392 [2024-07-21 08:33:29.854706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.392 [2024-07-21 08:33:29.854737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.392 qpair failed and we were unable to recover it.
00:37:20.392 [2024-07-21 08:33:29.854883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.392 [2024-07-21 08:33:29.854909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.392 qpair failed and we were unable to recover it.
00:37:20.392 [2024-07-21 08:33:29.855060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.392 [2024-07-21 08:33:29.855089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.392 qpair failed and we were unable to recover it.
00:37:20.392 [2024-07-21 08:33:29.855282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.392 [2024-07-21 08:33:29.855331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.392 qpair failed and we were unable to recover it.
00:37:20.392 [2024-07-21 08:33:29.855460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.392 [2024-07-21 08:33:29.855486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.392 qpair failed and we were unable to recover it.
00:37:20.392 [2024-07-21 08:33:29.855658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.392 [2024-07-21 08:33:29.855687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.855884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.855913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.856023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.856052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.856163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.856192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.856303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.856331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.856466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.856494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.856633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.856676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.856847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.856875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.856974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.857002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.857140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.857168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.857335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.857364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.857502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.857531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.857678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.857704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.857845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.857891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.858076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.858123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.858238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.858281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.858410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.858436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.858564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.858590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.858777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.858806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.858937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.858966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.859159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.859188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.859366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.859391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.859517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.859543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.859647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.859673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.859791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.859835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.860016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.860060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.860217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.860242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.860370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.860395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.860554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.860582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.860715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.860744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.860879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.860908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.861027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.861052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.861210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.861238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.861351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.861379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.861525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.861551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.393 [2024-07-21 08:33:29.861652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.393 [2024-07-21 08:33:29.861679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.393 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.861805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.861830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.861997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.862026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.862131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.862160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.862277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.862305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.862515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.862543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.862699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.862729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.862868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.862912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.863142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.863192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.863367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.863409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.863503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.863529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.863653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.863680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.863809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.863835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.863963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.863988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.864131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.864160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.864262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.864290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.864465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.864493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.864639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.864665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.864768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.864794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.864943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.864971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.865141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.865170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.865302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.865329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.865427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.865456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.865601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.865637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.865769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.865795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.865919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.865944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.866086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.866114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.866245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.866272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.866380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.866409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.866582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.866607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.866715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.866741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.866847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.866891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.867065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.867093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.867229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.867257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.867385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.867414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.867551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.867576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.867716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.867743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.867878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.867921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.868041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.868083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.868203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.868232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.868339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.868367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.394 qpair failed and we were unable to recover it.
00:37:20.394 [2024-07-21 08:33:29.868527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.394 [2024-07-21 08:33:29.868556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.868688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.868714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.868867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.868892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.868989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.869030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.869146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.869174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.869312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.869340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.869515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.869572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.869685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.869712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.869872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.869898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.870048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.870092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.870244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.870289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.870404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.870430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.870572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.870599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.870726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.870755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.870884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.870909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.871058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.871087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.871249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.871278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.871421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.871449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.871620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.871665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.871789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.871819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.872020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.872064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.872214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.872257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.872390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.872416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.872546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.872572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.872692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.872722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.872828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.872857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.872999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.873028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.873163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.873191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.873356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.873384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.873536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.873564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.873692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.873721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.873861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.873904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.874089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.874133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.874302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.874333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.874479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.874507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.874663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.874690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.874873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.874901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.875010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.875052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.875204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.875232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.875397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.875426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.875535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.395 [2024-07-21 08:33:29.875565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.395 qpair failed and we were unable to recover it.
00:37:20.395 [2024-07-21 08:33:29.875753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.875778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.875956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.875984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.876114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.876142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.876286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.876314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.876473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.876503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.876649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.876677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.876802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.876831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.877011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.877038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.877160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.877187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.877317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.877343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.877469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.877495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.877619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.877645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.877820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.877863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.877978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.878007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.878202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.878245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.878372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.878397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.878526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.878551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.878653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.878680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.878860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.878905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.879052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.879096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.879223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.879250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.879381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.879407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.879542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.879569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.879671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.879716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.879846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.879871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.880016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.880045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.880185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.880213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.880333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.880358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.880513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.880539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.880662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.880688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.880836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.880864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.881060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.881090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.881193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.881221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.881372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.881400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.881518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.881546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.881689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.881719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.881916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.881959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.882072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.882116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.882210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.882237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.882368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.882394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.882556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.396 [2024-07-21 08:33:29.882582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.396 qpair failed and we were unable to recover it.
00:37:20.396 [2024-07-21 08:33:29.882727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.882753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.882896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.882922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.883063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.883091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.883229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.883257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.883393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.883421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.883601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.883634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.883736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.883762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.883909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.883953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.884103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.884146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.884271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.884314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.884446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.884472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.884582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.884609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.884771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.884799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.884967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.884996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.885154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.885215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.885318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.885348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.885467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.885497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.885668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.885695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.885820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.885868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.886024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.886068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.886167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.886195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.886368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.886412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.886539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.886565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.886702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.886746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.886895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.886940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.887128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.397 [2024-07-21 08:33:29.887172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.397 qpair failed and we were unable to recover it.
00:37:20.397 [2024-07-21 08:33:29.887275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.397 [2024-07-21 08:33:29.887302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.397 qpair failed and we were unable to recover it. 00:37:20.397 [2024-07-21 08:33:29.887425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.397 [2024-07-21 08:33:29.887452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.397 qpair failed and we were unable to recover it. 00:37:20.397 [2024-07-21 08:33:29.887607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.397 [2024-07-21 08:33:29.887637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.397 qpair failed and we were unable to recover it. 00:37:20.397 [2024-07-21 08:33:29.887751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.397 [2024-07-21 08:33:29.887780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.397 qpair failed and we were unable to recover it. 00:37:20.397 [2024-07-21 08:33:29.887944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.397 [2024-07-21 08:33:29.887989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.397 qpair failed and we were unable to recover it. 
00:37:20.397 [2024-07-21 08:33:29.888158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.888184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.888321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.888349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.888480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.888506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.888652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.888682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.888823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.888851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 
00:37:20.398 [2024-07-21 08:33:29.889013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.889041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.889158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.889186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.889324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.889353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.889497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.889526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.889711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.889737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 
00:37:20.398 [2024-07-21 08:33:29.889859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.889887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.889988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.890017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.890186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.890215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.890348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.890376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.890506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.890538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 
00:37:20.398 [2024-07-21 08:33:29.890688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.890714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.890812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.890839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.890958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.891002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.891183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.891226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.891354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.891402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 
00:37:20.398 [2024-07-21 08:33:29.891557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.891583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.891737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.398 [2024-07-21 08:33:29.891782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.398 qpair failed and we were unable to recover it. 00:37:20.398 [2024-07-21 08:33:29.891927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.891973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.892125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.892169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.892323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.892352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 
00:37:20.399 [2024-07-21 08:33:29.892499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.892526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.892659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.892685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.892818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.892843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.893025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.893069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.893178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.893208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 
00:37:20.399 [2024-07-21 08:33:29.893350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.893379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.893537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.893562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.893665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.893690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.893788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.893814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.893972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.894000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 
00:37:20.399 [2024-07-21 08:33:29.894117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.894146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.894260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.894288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.894423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.894451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.894560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.894589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.894719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.894745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 
00:37:20.399 [2024-07-21 08:33:29.894851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.894878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.895026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.895075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.895241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.895268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.895392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.895418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.895517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.895543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 
00:37:20.399 [2024-07-21 08:33:29.895690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.895735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.895857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.895883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.895989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.896015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.896111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.896136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.896277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.896305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 
00:37:20.399 [2024-07-21 08:33:29.896395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.896421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.896549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.399 [2024-07-21 08:33:29.896575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.399 qpair failed and we were unable to recover it. 00:37:20.399 [2024-07-21 08:33:29.896712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.896737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.896861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.896903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.897011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.897039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 
00:37:20.400 [2024-07-21 08:33:29.897174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.897202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.897343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.897370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.897520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.897545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.897671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.897697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.897851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.897876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 
00:37:20.400 [2024-07-21 08:33:29.898020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.898049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.898187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.898215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.898351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.898379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.898545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.898573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.898729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.898756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 
00:37:20.400 [2024-07-21 08:33:29.898874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.898900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.899066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.899095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.899295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.899323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.899460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.899492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.899602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.899639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 
00:37:20.400 [2024-07-21 08:33:29.899775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.899800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.899925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.899965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.900108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.900137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.900300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.900328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.900464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.900492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 
00:37:20.400 [2024-07-21 08:33:29.900667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.900694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.900797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.900822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.900973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.900999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.901147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.901175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.901365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.901393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 
00:37:20.400 [2024-07-21 08:33:29.901540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.901566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.901692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.901718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.901849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.901911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.902057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.902101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.400 qpair failed and we were unable to recover it. 00:37:20.400 [2024-07-21 08:33:29.902254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.400 [2024-07-21 08:33:29.902298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 
00:37:20.401 [2024-07-21 08:33:29.902453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.902479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.902607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.902648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.902792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.902836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.902999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.903026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.903178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.903204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 
00:37:20.401 [2024-07-21 08:33:29.903303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.903328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.903426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.903453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.903557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.903582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.903715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.903741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.903914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.903943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 
00:37:20.401 [2024-07-21 08:33:29.904050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.904078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.904255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.904284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.904559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.904588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.904745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.904771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.904877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.904919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 
00:37:20.401 [2024-07-21 08:33:29.905169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.905197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.905335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.905363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.905502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.905530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.905686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.905712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.905841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.905867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 
00:37:20.401 [2024-07-21 08:33:29.905997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.906039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.906207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.906235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.906344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.906372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.906513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.906539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.906672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.906698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 
00:37:20.401 [2024-07-21 08:33:29.906825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.906851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.906963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.906991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.907116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.907159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.907279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.907308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.907481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.907509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 
00:37:20.401 [2024-07-21 08:33:29.907670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.907697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.907797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.401 [2024-07-21 08:33:29.907823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.401 qpair failed and we were unable to recover it. 00:37:20.401 [2024-07-21 08:33:29.907950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.907975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.908143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.908185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.908351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.908380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 
00:37:20.402 [2024-07-21 08:33:29.908547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.908575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.908727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.908753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.908856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.908895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.909049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.909075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.909276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.909305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 
00:37:20.402 [2024-07-21 08:33:29.909470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.909498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.909686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.909712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.909841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.909867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.910065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.910091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.910229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.910257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 
00:37:20.402 [2024-07-21 08:33:29.910399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.910428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.910600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.910660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.910765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.910791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.910920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.910947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.911116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.911144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 
00:37:20.402 [2024-07-21 08:33:29.911286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.911315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.911470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.911503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.911689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.911715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.911822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.911848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.911972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.912001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 
00:37:20.402 [2024-07-21 08:33:29.912160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.912188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.912331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.912359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.912528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.912556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.912679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.912705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.912832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.912857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 
00:37:20.402 [2024-07-21 08:33:29.912992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.913033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.913171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.913200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.913348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.913376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.913519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.913545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 00:37:20.402 [2024-07-21 08:33:29.913646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.402 [2024-07-21 08:33:29.913672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.402 qpair failed and we were unable to recover it. 
00:37:20.403 [2024-07-21 08:33:29.913805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.913830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.913942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.913970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.914126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.914151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.914333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.914362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.914498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.914524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 
00:37:20.403 [2024-07-21 08:33:29.914660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.914687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.914780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.914805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.914929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.914955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.915089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.915114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.915264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.915292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 
00:37:20.403 [2024-07-21 08:33:29.915415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.915457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.915595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.915626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.915757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.915782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.915957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.915985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.916134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.916163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 
00:37:20.403 [2024-07-21 08:33:29.916325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.916353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.916475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.916501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.916675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.916701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.916832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.916857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.916982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.917025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 
00:37:20.403 [2024-07-21 08:33:29.917132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.917160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.917291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.917319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.917456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.917484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.917590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.917626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.917748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.917774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 
00:37:20.403 [2024-07-21 08:33:29.917925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.403 [2024-07-21 08:33:29.917950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.403 qpair failed and we were unable to recover it. 00:37:20.403 [2024-07-21 08:33:29.918117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.404 [2024-07-21 08:33:29.918161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.404 qpair failed and we were unable to recover it. 00:37:20.404 [2024-07-21 08:33:29.918327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.404 [2024-07-21 08:33:29.918356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.404 qpair failed and we were unable to recover it. 00:37:20.404 [2024-07-21 08:33:29.918524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.404 [2024-07-21 08:33:29.918552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.404 qpair failed and we were unable to recover it. 00:37:20.404 [2024-07-21 08:33:29.918674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.404 [2024-07-21 08:33:29.918700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.404 qpair failed and we were unable to recover it. 
00:37:20.406 [2024-07-21 08:33:29.934209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.406 [2024-07-21 08:33:29.934234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.406 qpair failed and we were unable to recover it.
00:37:20.406 [2024-07-21 08:33:29.934362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.407 [2024-07-21 08:33:29.934387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.407 qpair failed and we were unable to recover it.
00:37:20.407 [2024-07-21 08:33:29.934555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.407 [2024-07-21 08:33:29.934583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.407 qpair failed and we were unable to recover it.
00:37:20.407 [2024-07-21 08:33:29.934717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.407 [2024-07-21 08:33:29.934759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.407 qpair failed and we were unable to recover it.
00:37:20.407 [2024-07-21 08:33:29.934890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.407 [2024-07-21 08:33:29.934917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.407 qpair failed and we were unable to recover it.
00:37:20.407 [2024-07-21 08:33:29.935071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.935097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.935224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.935250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.935353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.935379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.935530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.935557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.935666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.935693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 
00:37:20.407 [2024-07-21 08:33:29.935854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.935880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.936017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.936043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.936146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.936171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.936331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.936356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.936481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.936507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 
00:37:20.407 [2024-07-21 08:33:29.936609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.936640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.936777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.936803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.936934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.936960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.937087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.937112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.937208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.937233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 
00:37:20.407 [2024-07-21 08:33:29.937360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.937385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.937494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.937535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.937679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.937706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.937834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.937860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.938021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.938051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 
00:37:20.407 [2024-07-21 08:33:29.938219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.938244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.938349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.938375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.938507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.938532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.938683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.938708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.938866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.938892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 
00:37:20.407 [2024-07-21 08:33:29.939017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.939042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.939142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.939167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.939295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.939321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.939474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.407 [2024-07-21 08:33:29.939499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.407 qpair failed and we were unable to recover it. 00:37:20.407 [2024-07-21 08:33:29.939652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.939678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 
00:37:20.408 [2024-07-21 08:33:29.939777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.939802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.939921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.939946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.940047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.940072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.940206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.940231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.940357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.940384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 
00:37:20.408 [2024-07-21 08:33:29.940514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.940542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.940692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.940718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.940822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.940847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.940971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.940996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.941122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.941148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 
00:37:20.408 [2024-07-21 08:33:29.941276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.941301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.941403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.941429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.941553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.941578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.941681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.941708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.941836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.941861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 
00:37:20.408 [2024-07-21 08:33:29.942019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.942043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.942173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.942199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.942295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.942320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.942498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.942526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.942676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.942702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 
00:37:20.408 [2024-07-21 08:33:29.942834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.942860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.942962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.942988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.943093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.943118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.943246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.943272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.943370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.943397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 
00:37:20.408 [2024-07-21 08:33:29.943523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.943549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.943652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.943679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.943782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.943808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.943937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.943964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.944069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.944099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 
00:37:20.408 [2024-07-21 08:33:29.944241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.408 [2024-07-21 08:33:29.944266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.408 qpair failed and we were unable to recover it. 00:37:20.408 [2024-07-21 08:33:29.944389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.409 [2024-07-21 08:33:29.944415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.409 qpair failed and we were unable to recover it. 00:37:20.409 [2024-07-21 08:33:29.944545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.409 [2024-07-21 08:33:29.944570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.409 qpair failed and we were unable to recover it. 00:37:20.409 [2024-07-21 08:33:29.944726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.409 [2024-07-21 08:33:29.944753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.409 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.944908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.944934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 
00:37:20.695 [2024-07-21 08:33:29.945063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.945090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.945223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.945249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.945356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.945381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.945502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.945528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.945658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.945684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 
00:37:20.695 [2024-07-21 08:33:29.945780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.945806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.945901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.945928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.946042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.946070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.946197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.946223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.946329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.946356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 
00:37:20.695 [2024-07-21 08:33:29.946503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.946533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.946662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.946692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.946801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.946828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.946928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.946954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.947080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.947105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 
00:37:20.695 [2024-07-21 08:33:29.947203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.947228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.947338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.947365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.947460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.947485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 [2024-07-21 08:33:29.947639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.947666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 00:37:20.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 84470 Killed "${NVMF_APP[@]}" "$@" 00:37:20.695 [2024-07-21 08:33:29.947798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.695 [2024-07-21 08:33:29.947824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.695 qpair failed and we were unable to recover it. 
00:37:20.695 [2024-07-21 08:33:29.947952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.695 [2024-07-21 08:33:29.947981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.695 qpair failed and we were unable to recover it.
00:37:20.695 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:37:20.695 [2024-07-21 08:33:29.948083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.695 [2024-07-21 08:33:29.948109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.695 qpair failed and we were unable to recover it.
00:37:20.695 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:37:20.695 [2024-07-21 08:33:29.948228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.695 [2024-07-21 08:33:29.948253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.695 qpair failed and we were unable to recover it.
00:37:20.695 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:37:20.696 [2024-07-21 08:33:29.948380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.948407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:37:20.696 [2024-07-21 08:33:29.948557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.948583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.948717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.948743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.948892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.948919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.949024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.949051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.949156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.949182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.949347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.949373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.949475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.949501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.949617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.949644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.949780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.949805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.949909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.949935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.950091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.950116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.950230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.950257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.950408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.950437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.950666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.950692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.950819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.950844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.950978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.951005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.951130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.951157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.951254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.951279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.951387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.951412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.951593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.951628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.951745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.951770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.951888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.951927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.952034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.952061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=85017
00:37:20.696 [2024-07-21 08:33:29.952170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.952195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 85017
00:37:20.696 [2024-07-21 08:33:29.952314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.952341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:37:20.696 [2024-07-21 08:33:29.952476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 85017 ']'
00:37:20.696 [2024-07-21 08:33:29.952508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:20.696 [2024-07-21 08:33:29.952679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.952705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:20.696 [2024-07-21 08:33:29.952805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.952831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:20.696 [2024-07-21 08:33:29.952947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:37:20.696 [2024-07-21 08:33:29.952972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 08:33:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:20.696 [2024-07-21 08:33:29.953099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.953125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.953257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.953288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.953397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.953425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.953530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.953555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.953669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.696 [2024-07-21 08:33:29.953708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.696 qpair failed and we were unable to recover it.
00:37:20.696 [2024-07-21 08:33:29.953817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.953845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.953978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.954004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.954107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.954133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.954262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.954287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.954392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.954417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.954525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.954551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.954667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.954694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.954816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.954842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.954938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.954964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.955094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.955120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.955220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.955245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.955388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.955414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.955567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.955597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.955751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.955777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.955901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.955927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.956071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.956100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.956245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.956273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.956415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.956441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.956570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.956595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.956748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.956774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.956888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.956920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.957057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.957087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.957234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.957265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.957419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.957445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.957573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.957599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.957728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.957758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.957960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.957990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.958129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.958159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.958302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.958328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.958480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.958507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.958608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.958658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.958800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.958830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.958967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.958996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.959138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.959165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.959285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.959311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.959402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.959429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.959533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.959563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.959707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.959738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.959881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.959910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.960081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.960110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.697 [2024-07-21 08:33:29.960231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.697 [2024-07-21 08:33:29.960257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.697 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.960387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.960412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.960540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.960566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.960715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.960747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.960852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.960881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.961078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.961107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.961258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.961285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.961413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.961438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.961594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.961627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.961779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.961807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.962015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.962043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.962234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.962259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.962363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.962389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.962492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.962518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.962694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.962723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.962869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.962897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.963035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.963065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.963235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.963261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.963395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.963421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.963545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.963572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.963726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.963755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.963896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.963924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.964081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.964110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.964292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.964318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.964418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.964444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.964538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.964564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.964667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.964693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.964790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.964835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.964985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.965012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.965140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.965165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.965273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.965299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.965432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.965457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.965569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.965595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.965764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.698 [2024-07-21 08:33:29.965792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.698 qpair failed and we were unable to recover it.
00:37:20.698 [2024-07-21 08:33:29.965986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.698 [2024-07-21 08:33:29.966014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.698 qpair failed and we were unable to recover it. 00:37:20.698 [2024-07-21 08:33:29.966135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.698 [2024-07-21 08:33:29.966162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.698 qpair failed and we were unable to recover it. 00:37:20.698 [2024-07-21 08:33:29.966282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.698 [2024-07-21 08:33:29.966316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.698 qpair failed and we were unable to recover it. 00:37:20.698 [2024-07-21 08:33:29.966450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.698 [2024-07-21 08:33:29.966476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.698 qpair failed and we were unable to recover it. 00:37:20.698 [2024-07-21 08:33:29.966602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.698 [2024-07-21 08:33:29.966636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.698 qpair failed and we were unable to recover it. 
00:37:20.698 [2024-07-21 08:33:29.966766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.698 [2024-07-21 08:33:29.966795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.698 qpair failed and we were unable to recover it. 00:37:20.698 [2024-07-21 08:33:29.966953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.698 [2024-07-21 08:33:29.966979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.698 qpair failed and we were unable to recover it. 00:37:20.698 [2024-07-21 08:33:29.967077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.698 [2024-07-21 08:33:29.967104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.967202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.967229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.967320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.967346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 
00:37:20.699 [2024-07-21 08:33:29.967474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.967500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.967630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.967657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.967791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.967817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.967923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.967949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.968103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.968129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 
00:37:20.699 [2024-07-21 08:33:29.968262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.968288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.968398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.968424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.968548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.968575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.968767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.968797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.968933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.968962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 
00:37:20.699 [2024-07-21 08:33:29.969168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.969196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.969337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.969364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.969518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.969545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.969677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.969706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.969848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.969876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 
00:37:20.699 [2024-07-21 08:33:29.970085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.970111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.970237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.970263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.970394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.970420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.970548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.970574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.970720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.970747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 
00:37:20.699 [2024-07-21 08:33:29.970874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.970901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.971008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.971034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.971164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.971191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.971327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.971354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.971481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.971507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 
00:37:20.699 [2024-07-21 08:33:29.971628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.971655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.971761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.971788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.971916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.971942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.972038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.972065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 00:37:20.699 [2024-07-21 08:33:29.972197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.972223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.699 qpair failed and we were unable to recover it. 
00:37:20.699 [2024-07-21 08:33:29.972353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.699 [2024-07-21 08:33:29.972379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.972479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.972505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.972644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.972687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.972824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.972851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.972954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.972980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 
00:37:20.700 [2024-07-21 08:33:29.973081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.973107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.973208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.973233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.973363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.973388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.973486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.973513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.973646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.973673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 
00:37:20.700 [2024-07-21 08:33:29.973773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.973799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.973950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.973976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.974105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.974131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.974257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.974283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.974389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.974414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 
00:37:20.700 [2024-07-21 08:33:29.974511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.974538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.974663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.974690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.974820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.974846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.974943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.974969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.975095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.975121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 
00:37:20.700 [2024-07-21 08:33:29.975225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.975251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.975351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.975376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.975477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.975503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.975629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.975655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.975809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.975834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 
00:37:20.700 [2024-07-21 08:33:29.975982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.976007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.976137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.976162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.976293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.976318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.976451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.976476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.976578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.976603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 
00:37:20.700 [2024-07-21 08:33:29.976715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.976741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.976841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.976866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.976993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.977019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.977180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.977205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 00:37:20.700 [2024-07-21 08:33:29.977308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.700 [2024-07-21 08:33:29.977334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.700 qpair failed and we were unable to recover it. 
00:37:20.700 [2024-07-21 08:33:29.977446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.700 [2024-07-21 08:33:29.977472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.700 qpair failed and we were unable to recover it.
[the same three-line retry sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats verbatim through 2024-07-21 08:33:29.994356]
00:37:20.704 [2024-07-21 08:33:29.994478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.704 [2024-07-21 08:33:29.994503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.704 qpair failed and we were unable to recover it. 00:37:20.704 [2024-07-21 08:33:29.994634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.704 [2024-07-21 08:33:29.994661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.704 qpair failed and we were unable to recover it. 00:37:20.704 [2024-07-21 08:33:29.994765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.704 [2024-07-21 08:33:29.994790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.704 qpair failed and we were unable to recover it. 00:37:20.704 [2024-07-21 08:33:29.994885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.704 [2024-07-21 08:33:29.994912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.704 qpair failed and we were unable to recover it. 00:37:20.704 [2024-07-21 08:33:29.995042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.704 [2024-07-21 08:33:29.995068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.704 qpair failed and we were unable to recover it. 
00:37:20.704 [2024-07-21 08:33:29.995218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.704 [2024-07-21 08:33:29.995243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.704 qpair failed and we were unable to recover it. 00:37:20.704 [2024-07-21 08:33:29.995368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.704 [2024-07-21 08:33:29.995394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.704 qpair failed and we were unable to recover it. 00:37:20.704 [2024-07-21 08:33:29.995512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.704 [2024-07-21 08:33:29.995538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.704 qpair failed and we were unable to recover it. 00:37:20.704 [2024-07-21 08:33:29.995640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.704 [2024-07-21 08:33:29.995668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.704 qpair failed and we were unable to recover it. 00:37:20.704 [2024-07-21 08:33:29.995767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.704 [2024-07-21 08:33:29.995792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.704 qpair failed and we were unable to recover it. 
00:37:20.705 [2024-07-21 08:33:29.995898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.995924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.996076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.996065] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:37:20.705 [2024-07-21 08:33:29.996103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.996138] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:20.705 [2024-07-21 08:33:29.996228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.996254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.996378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.996402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 
00:37:20.705 [2024-07-21 08:33:29.996496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.996527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.996658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.996685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.996810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.996836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.996962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.996987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.997092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.997118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 
00:37:20.705 [2024-07-21 08:33:29.997246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.997272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.997373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.997398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.997531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.997557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.997660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.997687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.997790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.997816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 
00:37:20.705 [2024-07-21 08:33:29.997969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.997995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.998133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.998158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.998290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.998315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.998444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.998471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 00:37:20.705 [2024-07-21 08:33:29.998606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.705 [2024-07-21 08:33:29.998639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.705 qpair failed and we were unable to recover it. 
00:37:20.705 [2024-07-21 08:33:29.998740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:29.998766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:29.998885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:29.998911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:29.999042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:29.999067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:29.999168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:29.999194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:29.999321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:29.999346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 
00:37:20.706 [2024-07-21 08:33:29.999474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:29.999500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:29.999599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:29.999638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:29.999743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:29.999769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:29.999899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:29.999925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.000082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.000108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 
00:37:20.706 [2024-07-21 08:33:30.000208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.000234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.000336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.000362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.000516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.000542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.000675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.000702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.000832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.000858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 
00:37:20.706 [2024-07-21 08:33:30.000963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.000989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.001106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.001131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.001256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.001282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.001413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.001440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.001585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.001611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 
00:37:20.706 [2024-07-21 08:33:30.001722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.001748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.001850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.001876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.002006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.002032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.002145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.002171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.002308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.002334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 
00:37:20.706 [2024-07-21 08:33:30.002445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.706 [2024-07-21 08:33:30.002471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.706 qpair failed and we were unable to recover it. 00:37:20.706 [2024-07-21 08:33:30.002573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.002603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.002737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.002764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.002905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.002931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.003068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.003094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 
00:37:20.707 [2024-07-21 08:33:30.003208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.003235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.003365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.003392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.003504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.003530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.005148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.005181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.005294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.005321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 
00:37:20.707 [2024-07-21 08:33:30.005441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.005467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.005568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.005594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.005702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.005728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.005879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.005905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.006024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.006051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 
00:37:20.707 [2024-07-21 08:33:30.006167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.006193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.006326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.006352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.006460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.006486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.006643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.006670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 00:37:20.707 [2024-07-21 08:33:30.006786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.707 [2024-07-21 08:33:30.006812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.707 qpair failed and we were unable to recover it. 
00:37:20.707 [2024-07-21 08:33:30.006939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.707 [2024-07-21 08:33:30.006965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.707 qpair failed and we were unable to recover it.
00:37:20.707 [... the same connect() failure (errno = 111, ECONNREFUSED) and unrecoverable-qpair error pair for tqpair=0x64d560 (addr=10.0.0.2, port=4420) repeats ~115 more times through 08:33:30.023181; identical entries elided ...]
00:37:20.711 [2024-07-21 08:33:30.023273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.711 [2024-07-21 08:33:30.023299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.711 qpair failed and we were unable to recover it. 00:37:20.711 [2024-07-21 08:33:30.023398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.711 [2024-07-21 08:33:30.023423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.711 qpair failed and we were unable to recover it. 00:37:20.712 [2024-07-21 08:33:30.023554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.712 [2024-07-21 08:33:30.023584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.712 qpair failed and we were unable to recover it. 00:37:20.712 [2024-07-21 08:33:30.023721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.712 [2024-07-21 08:33:30.023747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.712 qpair failed and we were unable to recover it. 00:37:20.712 [2024-07-21 08:33:30.023858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.712 [2024-07-21 08:33:30.023884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.712 qpair failed and we were unable to recover it. 
00:37:20.712 [2024-07-21 08:33:30.023981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.712 [2024-07-21 08:33:30.024007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.712 qpair failed and we were unable to recover it. 00:37:20.712 [2024-07-21 08:33:30.024132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.712 [2024-07-21 08:33:30.024158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.712 qpair failed and we were unable to recover it. 00:37:20.712 [2024-07-21 08:33:30.024253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.712 [2024-07-21 08:33:30.024279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.712 qpair failed and we were unable to recover it. 00:37:20.712 [2024-07-21 08:33:30.024413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.712 [2024-07-21 08:33:30.024439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.712 qpair failed and we were unable to recover it. 00:37:20.712 [2024-07-21 08:33:30.024546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.712 [2024-07-21 08:33:30.024573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.712 qpair failed and we were unable to recover it. 
00:37:20.712 [2024-07-21 08:33:30.024737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.712 [2024-07-21 08:33:30.024778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.712 qpair failed and we were unable to recover it.
00:37:20.713 EAL: No free 2048 kB hugepages reported on node 1
00:37:20.713 --- last error group repeated 84 times (connect() failed, errno = 111; sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) ---
00:37:20.714 [2024-07-21 08:33:30.037971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.037997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.038100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.038126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.038255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.038280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.038403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.038428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.038534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.038560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 
00:37:20.714 [2024-07-21 08:33:30.038682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.038708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.038832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.038858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.038990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.039015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.039116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.039142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.039246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.039276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 
00:37:20.714 [2024-07-21 08:33:30.039404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.039429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.039537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.039563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.039737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.039763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.039919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.039944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.040067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.040108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 
00:37:20.714 [2024-07-21 08:33:30.040229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.040256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.040366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.040392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.040520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.040546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.040661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.040689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.040789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.040816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 
00:37:20.714 [2024-07-21 08:33:30.040944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.040970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.714 [2024-07-21 08:33:30.041077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.714 [2024-07-21 08:33:30.041103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.714 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.041208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.041233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.041359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.041385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.041592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.041626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 
00:37:20.715 [2024-07-21 08:33:30.041731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.041757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.041909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.041934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.042036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.042068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.042194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.042220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.042333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.042368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 
00:37:20.715 [2024-07-21 08:33:30.042512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.042540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.042655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.042682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.042797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.042823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.042953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.042978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.043107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.043133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 
00:37:20.715 [2024-07-21 08:33:30.043266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.043292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.043386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.043411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.043539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.043565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.043700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.043728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.043838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.043864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 
00:37:20.715 [2024-07-21 08:33:30.043993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.044019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.044155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.044181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.044274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.044300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.044443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.044469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.044647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.044674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 
00:37:20.715 [2024-07-21 08:33:30.044783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.044809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.044911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.044937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.045064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.045090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.045218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.045244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.045367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.045393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 
00:37:20.715 [2024-07-21 08:33:30.045509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.045534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.045640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.045667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.045780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.045805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.045935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.045961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.046075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.046101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 
00:37:20.715 [2024-07-21 08:33:30.046253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.046279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.046375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.046400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.046510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.046536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.046668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.046695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.046849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.046874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 
00:37:20.715 [2024-07-21 08:33:30.046974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.047000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.047120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.715 [2024-07-21 08:33:30.047145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.715 qpair failed and we were unable to recover it. 00:37:20.715 [2024-07-21 08:33:30.047272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.047299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 00:37:20.716 [2024-07-21 08:33:30.047426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.047451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 00:37:20.716 [2024-07-21 08:33:30.047546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.047571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 
00:37:20.716 [2024-07-21 08:33:30.047708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.047735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 00:37:20.716 [2024-07-21 08:33:30.047833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.047858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 00:37:20.716 [2024-07-21 08:33:30.047959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.047989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 00:37:20.716 [2024-07-21 08:33:30.048115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.048140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 00:37:20.716 [2024-07-21 08:33:30.048265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.048290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 
00:37:20.716 [2024-07-21 08:33:30.048388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.048413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 00:37:20.716 [2024-07-21 08:33:30.048541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.048567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 00:37:20.716 [2024-07-21 08:33:30.048687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.048713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 00:37:20.716 [2024-07-21 08:33:30.048818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.048844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 00:37:20.716 [2024-07-21 08:33:30.048946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.716 [2024-07-21 08:33:30.048971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.716 qpair failed and we were unable to recover it. 
00:37:20.716 [2024-07-21 08:33:30.049098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.049123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.049220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.049247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.049364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.049390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.049483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.049509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.049629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.049655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.049783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.049808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.049932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.049958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.050083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.050109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.050232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.050257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.050384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.050410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.050537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.050562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.050663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.050688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.050810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.050837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.050945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.050971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.051092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.051118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.051243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.051269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.051422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.051448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.051557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.051583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.051713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.051739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.716 [2024-07-21 08:33:30.051856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.716 [2024-07-21 08:33:30.051896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.716 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.052088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.052115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.052219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.052245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.052387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.052414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.052539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.052565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.052670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.052698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.052862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.052888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.053026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.053052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.053151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.053177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.053269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.053295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.053384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.053410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.053520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.053545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.053687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.053713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.053816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.053847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.053977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.054003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.054135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.054161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.054288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.054313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.054410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.054437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.054539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.054566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.054708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.054735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.054859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.054885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.055025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.055052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.055156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.055182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.055314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.055340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.055467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.055494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.055599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.055629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.055758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.055785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.055897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.055932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.056060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.056087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.056199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.056224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.056332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.056358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.717 qpair failed and we were unable to recover it.
00:37:20.717 [2024-07-21 08:33:30.056459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.717 [2024-07-21 08:33:30.056485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.056636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.056676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.056788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.056815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.056955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.056980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.057083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.057110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.057208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.057233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.057362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.057388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.057539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.057564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.057676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.057702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.057808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.057836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.057961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.057987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.058081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.058106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.058219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.058245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.058357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.058382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.058481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.058506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.058639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.058665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.058770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.058796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.058953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.058979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.059080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.059106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.059230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.059257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.059365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.059390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.059484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.059511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.059610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.059645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.059751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.059777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.059923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.059949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.060054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.060080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.060195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.060222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.060351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.060377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.060486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.060512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.060649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.060693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.060846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.718 [2024-07-21 08:33:30.060890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.718 qpair failed and we were unable to recover it.
00:37:20.718 [2024-07-21 08:33:30.061022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.061055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.061231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.061263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.061377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.061409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.061536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.061569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.061704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.061736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.061856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.061888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.062007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.062038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.062155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.062187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.062308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.062343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.062502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.062534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.062662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.062694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.062812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.062846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.062994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.063027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.063160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.063193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.063323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.063369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.063524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.063553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.063670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.063697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.063804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.063830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.063938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.063987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.064147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.064188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.064327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.064354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.064458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.064484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.064594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.064636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.064768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.064794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.064924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.064950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.065060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.065085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.065187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.065213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.065311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.065336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.065466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.065492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.065620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.065646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.065735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.065761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.065859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.065885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.066004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.066030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.719 [2024-07-21 08:33:30.066165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.719 [2024-07-21 08:33:30.066191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.719 qpair failed and we were unable to recover it.
00:37:20.720 [2024-07-21 08:33:30.066291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.066317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.066420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.066446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.066547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.066573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.066742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.066769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.066868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.066896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 
00:37:20.720 [2024-07-21 08:33:30.067013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.067041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.067204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.067230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.067341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.067367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.067465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.067491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.067623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.067649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 
00:37:20.720 [2024-07-21 08:33:30.067745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.067770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.067944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.067974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.068102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.068128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.068271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.068297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.068314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:20.720 [2024-07-21 08:33:30.068423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.068448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 
00:37:20.720 [2024-07-21 08:33:30.068556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.068582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.068699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.068727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.068832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.068858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.068965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.068991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.069085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.069111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 
00:37:20.720 [2024-07-21 08:33:30.069209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.069234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.069335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.069361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.069463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.069488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.069651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.069678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.069805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.069835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 
00:37:20.720 [2024-07-21 08:33:30.069934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.069960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.070050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.070076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.070212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.070237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.070332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.070359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.070453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.070482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 
00:37:20.720 [2024-07-21 08:33:30.070579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.070606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.070724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.070749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.070849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.070875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.071025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.071064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.720 [2024-07-21 08:33:30.071165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.071193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 
00:37:20.720 [2024-07-21 08:33:30.071303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.720 [2024-07-21 08:33:30.071329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.720 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.071432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.071458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.071567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.071593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.071657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65b610 (9): Bad file descriptor 00:37:20.721 [2024-07-21 08:33:30.071806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.071845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.071992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.072021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 
00:37:20.721 [2024-07-21 08:33:30.072128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.072156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.072289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.072315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.072420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.072446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.072552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.072579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.072696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.072722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 
00:37:20.721 [2024-07-21 08:33:30.072853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.072878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.072971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.072997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.073104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.073130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.073261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.073291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.073404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.073431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 
00:37:20.721 [2024-07-21 08:33:30.073564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.073591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.073713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.073740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.073871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.073897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.074015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.074041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.074168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.074193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 
00:37:20.721 [2024-07-21 08:33:30.074322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.074349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.074455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.074482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.074591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.074634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.074737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.074763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 00:37:20.721 [2024-07-21 08:33:30.074909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.721 [2024-07-21 08:33:30.074935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.721 qpair failed and we were unable to recover it. 
00:37:20.721 [2024-07-21 08:33:30.075062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.075087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.075193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.075219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.075357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.075383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.075516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.075541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.075685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.075712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 
00:37:20.722 [2024-07-21 08:33:30.075812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.075838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.075944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.075970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.076073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.076098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.076206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.076233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.076361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.076388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 
00:37:20.722 [2024-07-21 08:33:30.076534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.076574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.076723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.076754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.076900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.076926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.077086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.077112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.077221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.077249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 
00:37:20.722 [2024-07-21 08:33:30.077353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.077379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.077502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.077529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.077643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.077669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.077820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.077859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.077976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.078006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 
00:37:20.722 [2024-07-21 08:33:30.078165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.078191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.078301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.078327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.078489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.078515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.078650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.078677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.078797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.078823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 
00:37:20.722 [2024-07-21 08:33:30.078951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.078978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.079109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.079135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.079266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.079294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.079437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.079463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 00:37:20.722 [2024-07-21 08:33:30.079607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.722 [2024-07-21 08:33:30.079654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.722 qpair failed and we were unable to recover it. 
00:37:20.722 [2024-07-21 08:33:30.079790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.079817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.079923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.079955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.080065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.080092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.080221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.080247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.080380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.080407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 
00:37:20.723 [2024-07-21 08:33:30.080564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.080591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.080740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.080769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.080904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.080930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.081055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.081081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.081181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.081207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 
00:37:20.723 [2024-07-21 08:33:30.081311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.081336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.081438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.081466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.081564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.081591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.081705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.081731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.081861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.081886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 
00:37:20.723 [2024-07-21 08:33:30.082001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.082027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.082155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.082182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.082312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.082337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.082481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.082521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.082650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.082678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 
00:37:20.723 [2024-07-21 08:33:30.082810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.082836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.082982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.083008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.083130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.083156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.083260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.083287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.083419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.083446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 
00:37:20.723 [2024-07-21 08:33:30.083609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.083652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.083749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.083775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.083904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.083929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.084031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.084062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.084163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.084188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 
00:37:20.723 [2024-07-21 08:33:30.084290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.084316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.084421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.084446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.084545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.084572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.723 [2024-07-21 08:33:30.084737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.723 [2024-07-21 08:33:30.084764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.723 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.084876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.084902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 
00:37:20.724 [2024-07-21 08:33:30.085030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.085056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.085157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.085184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.085283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.085310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.085411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.085438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.085582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.085608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 
00:37:20.724 [2024-07-21 08:33:30.085749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.085775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.085906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.085932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.086034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.086061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.086202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.086228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.086332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.086359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 
00:37:20.724 [2024-07-21 08:33:30.086456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.086483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.086611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.086642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.086746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.086772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.086880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.086906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.087044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.087070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 
00:37:20.724 [2024-07-21 08:33:30.087174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.087201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.087330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.087357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.087485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.087512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.087649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.087677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.087781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.087806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 
00:37:20.724 [2024-07-21 08:33:30.087944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.087970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.088096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.088121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.088222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.088248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.088370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.088397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.088527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.088553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 
00:37:20.724 [2024-07-21 08:33:30.088660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.088686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.088818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.088844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.088979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.089005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.089101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.089128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.089228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.089254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 
00:37:20.724 [2024-07-21 08:33:30.089353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.089379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.089473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.089498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.089622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.089649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.089757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.089782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 00:37:20.724 [2024-07-21 08:33:30.089888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.089914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.724 qpair failed and we were unable to recover it. 
00:37:20.724 [2024-07-21 08:33:30.090037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.724 [2024-07-21 08:33:30.090062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.090216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.090241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.090332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.090358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.090483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.090508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.090618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.090644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 
00:37:20.725 [2024-07-21 08:33:30.090747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.090774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.090869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.090895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.091021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.091047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.091180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.091205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.091336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.091362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 
00:37:20.725 [2024-07-21 08:33:30.091496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.091521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.091657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.091683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.091788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.091819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.091928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.091954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.092085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.092110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 
00:37:20.725 [2024-07-21 08:33:30.092227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.092253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.092361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.092401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.092540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.092580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.092727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.092755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.092893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.092919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 
00:37:20.725 [2024-07-21 08:33:30.093027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.093054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.093186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.093213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.093343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.093370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.093472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.093498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.093647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.093673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 
00:37:20.725 [2024-07-21 08:33:30.093797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.093822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.093958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.093984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.094095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.094120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.094221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.094248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.094377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.094403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 
00:37:20.725 [2024-07-21 08:33:30.094511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.094538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.094676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.094703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.094821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.094847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.094956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.094982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.095078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.095104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 
00:37:20.725 [2024-07-21 08:33:30.095226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.095251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.095381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.095408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.095505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.095532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.095682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.095722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.095864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.095897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 
00:37:20.725 [2024-07-21 08:33:30.096036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.725 [2024-07-21 08:33:30.096062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.725 qpair failed and we were unable to recover it. 00:37:20.725 [2024-07-21 08:33:30.096215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.096241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.096348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.096374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.096512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.096538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.096697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.096724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 
00:37:20.726 [2024-07-21 08:33:30.096858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.096883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.096979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.097005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.097100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.097126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.097232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.097259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.097382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.097427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 
00:37:20.726 [2024-07-21 08:33:30.097565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.097593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.097760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.097786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.097888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.097914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.098050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.098076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.098204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.098230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 
00:37:20.726 [2024-07-21 08:33:30.098337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.098363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.098484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.098524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.098649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.098678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.098794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.098821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.098955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.098982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 
00:37:20.726 [2024-07-21 08:33:30.099091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.099117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.099247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.099275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.099385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.099412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.099540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.099566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.099674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.099700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 
00:37:20.726 [2024-07-21 08:33:30.099909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.099935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.100051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.100078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.100182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.100208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.100334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.100360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.100469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.100494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 
00:37:20.726 [2024-07-21 08:33:30.100594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.100628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.100787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.100813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.100914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.100940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.101073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.101098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 00:37:20.726 [2024-07-21 08:33:30.101302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.726 [2024-07-21 08:33:30.101328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.726 qpair failed and we were unable to recover it. 
00:37:20.727 [2024-07-21 08:33:30.101434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.101460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.101564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.101591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.101726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.101752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.101867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.101892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.102023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.102053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 
00:37:20.727 [2024-07-21 08:33:30.102155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.102182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.102308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.102334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.102460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.102486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.102624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.102652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.102757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.102782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 
00:37:20.727 [2024-07-21 08:33:30.102877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.102904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.103025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.103051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.103150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.103176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.103310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.103336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.103468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.103494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 
00:37:20.727 [2024-07-21 08:33:30.103594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.103627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.103752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.103791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.103895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.103922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.104055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.104082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.104214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.104240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 
00:37:20.727 [2024-07-21 08:33:30.104338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.104364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.104501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.104527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.104630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.104657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.104771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.104797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.104955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.104981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 
00:37:20.727 [2024-07-21 08:33:30.105112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.105138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.105266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.105292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.105397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.105423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.105518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.105544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.105698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.105741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 
00:37:20.727 [2024-07-21 08:33:30.105851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.105879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.105987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.106015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.106149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.106176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.106312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.106338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.106463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.106488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 
00:37:20.727 [2024-07-21 08:33:30.106581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.106607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.106714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.106740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.106848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.106874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.107000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.107026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 00:37:20.727 [2024-07-21 08:33:30.107126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.727 [2024-07-21 08:33:30.107152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.727 qpair failed and we were unable to recover it. 
00:37:20.728 [2024-07-21 08:33:30.107305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.107330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.107451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.107477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.107607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.107640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.107768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.107794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.107901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.107933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.108065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.108091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.108207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.108233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.108358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.108386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.108491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.108517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.108636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.108674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.108811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.108838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.109004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.109031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.109183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.109208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.109311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.109338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.109473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.109499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.109624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.109651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.109775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.109801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.109913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.109939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.110100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.110126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.110241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.110267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.110367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.110392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.110525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.110551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.110672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.110711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.110819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.110846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.110950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.110976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.111080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.111105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.728 [2024-07-21 08:33:30.111231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.728 [2024-07-21 08:33:30.111258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.728 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.111373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.111412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.111545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.111572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.111696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.111724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.111853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.111879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.111986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.112018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.112156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.112183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.112344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.112371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.112491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.112530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.112707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.112735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.112840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.112866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.112963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.112989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.113122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.113148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.113304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.113332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.113464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.113490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.113594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.113625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.113762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.113788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.113944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.113970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.114103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.114130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.114266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.114293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.114423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.114449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.114580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.114607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.114730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.114756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.114871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.114921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.115062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.115088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.115189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.115215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.115371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.729 [2024-07-21 08:33:30.115397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.729 qpair failed and we were unable to recover it.
00:37:20.729 [2024-07-21 08:33:30.115502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.115528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.115682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.115709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.115816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.115842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.115948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.115974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.116077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.116103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.116238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.116269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.116373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.116399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.116543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.116569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.116716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.116745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.116880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.116907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.117064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.117091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.117252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.117292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.117418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.117444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.117585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.117620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.117749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.117775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.117885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.117923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.118028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.118055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.118153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.118180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.118311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.118338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.118501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.118527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.118636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.118663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.118764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.118792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.118919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.118946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.119082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.119108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.119208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.119235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.119373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.119406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.730 qpair failed and we were unable to recover it.
00:37:20.730 [2024-07-21 08:33:30.119545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.730 [2024-07-21 08:33:30.119571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.119718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.119744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.119840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.119867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.119979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.120005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.120108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.120134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.120242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.120267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.120430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.120457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.120557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.120583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.120709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.120734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.120860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.120886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.120985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.121011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.121112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.121138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.121277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.121302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.121426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.121452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.121929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.121958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.122111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.731 [2024-07-21 08:33:30.122138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.731 qpair failed and we were unable to recover it.
00:37:20.731 [2024-07-21 08:33:30.122255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.122281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.731 qpair failed and we were unable to recover it. 00:37:20.731 [2024-07-21 08:33:30.122388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.122415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.731 qpair failed and we were unable to recover it. 00:37:20.731 [2024-07-21 08:33:30.122523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.122549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.731 qpair failed and we were unable to recover it. 00:37:20.731 [2024-07-21 08:33:30.122661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.122689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.731 qpair failed and we were unable to recover it. 00:37:20.731 [2024-07-21 08:33:30.122813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.122839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.731 qpair failed and we were unable to recover it. 
00:37:20.731 [2024-07-21 08:33:30.122951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.122984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.731 qpair failed and we were unable to recover it. 00:37:20.731 [2024-07-21 08:33:30.123115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.123141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.731 qpair failed and we were unable to recover it. 00:37:20.731 [2024-07-21 08:33:30.123274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.123300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.731 qpair failed and we were unable to recover it. 00:37:20.731 [2024-07-21 08:33:30.123396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.123422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.731 qpair failed and we were unable to recover it. 00:37:20.731 [2024-07-21 08:33:30.123534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.123575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.731 qpair failed and we were unable to recover it. 
00:37:20.731 [2024-07-21 08:33:30.123715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.123744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.731 qpair failed and we were unable to recover it. 00:37:20.731 [2024-07-21 08:33:30.123878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.731 [2024-07-21 08:33:30.123909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.124012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.124038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.124182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.124208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.124343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.124369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 
00:37:20.732 [2024-07-21 08:33:30.124479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.124506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.124608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.124644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.124783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.124824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.124941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.124970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.125095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.125130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 
00:37:20.732 [2024-07-21 08:33:30.125267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.125293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.125424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.125452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.125556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.125583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.125702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.125729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.125822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.125847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 
00:37:20.732 [2024-07-21 08:33:30.125960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.125994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.126109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.126135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.126237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.126263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.126397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.126423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 00:37:20.732 [2024-07-21 08:33:30.126521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.732 [2024-07-21 08:33:30.126550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.732 qpair failed and we were unable to recover it. 
00:37:20.733 [2024-07-21 08:33:30.126679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.126708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.126823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.126850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.126988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.127014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.127171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.127197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.127327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.127353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 
00:37:20.733 [2024-07-21 08:33:30.127469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.127496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.127623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.127663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.127805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.127833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.127982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.128009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.128115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.128142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 
00:37:20.733 [2024-07-21 08:33:30.128250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.128278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.128380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.128408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.128504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.128530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.128667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.128694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.128806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.128834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 
00:37:20.733 [2024-07-21 08:33:30.128932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.128959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.129060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.129086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.129190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.129217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.129314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.129340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.129440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.129465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 
00:37:20.733 [2024-07-21 08:33:30.129575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.129601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.129732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.129758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.129853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.129879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.130052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.130085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 00:37:20.733 [2024-07-21 08:33:30.130188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.733 [2024-07-21 08:33:30.130215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.733 qpair failed and we were unable to recover it. 
00:37:20.734 [2024-07-21 08:33:30.130312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.130337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.130467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.130494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.130604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.130642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.130771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.130797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.130897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.130923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 
00:37:20.734 [2024-07-21 08:33:30.131064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.131090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.131189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.131215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.131334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.131359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.131504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.131543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.131664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.131692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 
00:37:20.734 [2024-07-21 08:33:30.131808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.131846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.131989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.132016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.132165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.132206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.132327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.132355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.132459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.132486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 
00:37:20.734 [2024-07-21 08:33:30.132625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.132652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.132777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.132803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.132928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.132954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.133171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.133198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.133356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.133382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 
00:37:20.734 [2024-07-21 08:33:30.133481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.133509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.133640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.133668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.133805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.133832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.133928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.133955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.734 [2024-07-21 08:33:30.134078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.134105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 
00:37:20.734 [2024-07-21 08:33:30.134231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.734 [2024-07-21 08:33:30.134257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.734 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.134356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.134383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.134518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.134545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.134663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.134703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.134812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.134840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 
00:37:20.735 [2024-07-21 08:33:30.134982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.135010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.135121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.135147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.135253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.135279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.135406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.135433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.135574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.135600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 
00:37:20.735 [2024-07-21 08:33:30.135830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.135856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.135958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.135986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.136098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.136124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.136248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.136275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 00:37:20.735 [2024-07-21 08:33:30.136404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.735 [2024-07-21 08:33:30.136431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.735 qpair failed and we were unable to recover it. 
00:37:20.735 [2024-07-21 08:33:30.136560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.136586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.136706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.136733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.136845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.136889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.137007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.137046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.137183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.137210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.137313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.137341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.137470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.137497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.137602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.137639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.137751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.137778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.137874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.137901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.138056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.138083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.138180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.138206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.735 [2024-07-21 08:33:30.138309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.735 [2024-07-21 08:33:30.138339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.735 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.138474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.138501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.138656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.138695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.138802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.138829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.138943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.138969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.139098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.139123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.139238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.139263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.139367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.139394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.139546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.139573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.139687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.139714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.139813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.139839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.139996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.140022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.140125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.140153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.140286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.140314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.140450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.140478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.140591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.140639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.140768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.140796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.140957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.140989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.141116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.141142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.141275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.141302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.141436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.141462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.141562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.141589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.141711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.141738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.141842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.141869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.142012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.142038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.142151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.142178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.142319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.736 [2024-07-21 08:33:30.142357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.736 qpair failed and we were unable to recover it.
00:37:20.736 [2024-07-21 08:33:30.142485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.142524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.142684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.142723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.142833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.142860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.143013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.143039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.143156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.143182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.143393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.143419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.143629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.143656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.143792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.143818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.143920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.143948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.144071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.144097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.144204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.144232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.144366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.144392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.144505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.144544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.144693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.144733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.144879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.144907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.145038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.145064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.145168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.145195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.145328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.145355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.145481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.145507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.145649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.145688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.145823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.145852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.145961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.145989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.146095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.146121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.146246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.146272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.146396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.146422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.146522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.146549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.146688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.146717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.146848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.146876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.146983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.147009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.147137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.147164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.147268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.147299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.147394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.147420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.147518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.147545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.737 qpair failed and we were unable to recover it.
00:37:20.737 [2024-07-21 08:33:30.147703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.737 [2024-07-21 08:33:30.147742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.147846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.147873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.148005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.148031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.148162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.148189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.148288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.148314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.148409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.148435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.148581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.148626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.148746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.148775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.148883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.148911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.149020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.149047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.149203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.149229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.149362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.149390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.149488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.149514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.149639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.149666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.149779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.149805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.149921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.149947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.150041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.150067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.150198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.150224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.150326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.150354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.150466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.150495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.150603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.150636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.150765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.150792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.150896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.738 [2024-07-21 08:33:30.150922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.738 qpair failed and we were unable to recover it.
00:37:20.738 [2024-07-21 08:33:30.151021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.738 [2024-07-21 08:33:30.151047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.738 qpair failed and we were unable to recover it. 00:37:20.738 [2024-07-21 08:33:30.151181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.738 [2024-07-21 08:33:30.151210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.738 qpair failed and we were unable to recover it. 00:37:20.738 [2024-07-21 08:33:30.151363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.151390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.151527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.151566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.151710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.151739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 
00:37:20.739 [2024-07-21 08:33:30.151852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.151877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.151984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.152009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.152119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.152146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.152281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.152307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.152430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.152456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 
00:37:20.739 [2024-07-21 08:33:30.152586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.152611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.152823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.152848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.152987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.153012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.153116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.153142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.153238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.153267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 
00:37:20.739 [2024-07-21 08:33:30.153375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.153415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.153560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.153598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.153723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.153750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.153856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.153882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.153980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.154005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 
00:37:20.739 [2024-07-21 08:33:30.154140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.154165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.154288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.154313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.154458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.154483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.154618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.154645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.154755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.154781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 
00:37:20.739 [2024-07-21 08:33:30.154906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.154931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.155062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.155088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.155225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.155250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.155386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.155416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.155549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.155576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 
00:37:20.739 [2024-07-21 08:33:30.155680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.155707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.155807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.155834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.155943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.155969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.156101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.156127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.156295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.156322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 
00:37:20.739 [2024-07-21 08:33:30.156436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.156461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.156606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.156653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.156757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.156784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.739 [2024-07-21 08:33:30.156883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.739 [2024-07-21 08:33:30.156909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.739 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.157016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.157043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 
00:37:20.740 [2024-07-21 08:33:30.157170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.157196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.157305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.157349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.157494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.157533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.157678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.157706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.157811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.157838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 
00:37:20.740 [2024-07-21 08:33:30.157951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.157977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.158104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.158130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.158228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.158254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.158393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.158421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.158518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.158544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 
00:37:20.740 [2024-07-21 08:33:30.158698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.158725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.158825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.158850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.158998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.159024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.159122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.159149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.159301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.159327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 
00:37:20.740 [2024-07-21 08:33:30.159453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.159492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.159606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.159641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.159854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.159882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.160017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.160043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.160145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.160172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 
00:37:20.740 [2024-07-21 08:33:30.160278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.160304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.160412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.160437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.160542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.160570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.160688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.160715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.160817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.160843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 
00:37:20.740 [2024-07-21 08:33:30.160954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.160987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.161091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.161116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.161214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.161242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.161348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.161381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 00:37:20.740 [2024-07-21 08:33:30.161480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.740 [2024-07-21 08:33:30.161507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.740 qpair failed and we were unable to recover it. 
00:37:20.740 [2024-07-21 08:33:30.161644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.740 [2024-07-21 08:33:30.161671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.740 qpair failed and we were unable to recover it.
00:37:20.741 [2024-07-21 08:33:30.161758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:20.741 [2024-07-21 08:33:30.161769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.741 [2024-07-21 08:33:30.161791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:20.741 [2024-07-21 08:33:30.161795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.741 [2024-07-21 08:33:30.161806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:20.741 qpair failed and we were unable to recover it.
00:37:20.741 [2024-07-21 08:33:30.161818] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:20.741 [2024-07-21 08:33:30.161829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:20.741 [2024-07-21 08:33:30.161895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.741 [2024-07-21 08:33:30.161921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.741 qpair failed and we were unable to recover it.
00:37:20.741 [2024-07-21 08:33:30.161884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:37:20.741 [2024-07-21 08:33:30.161923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:37:20.741 [2024-07-21 08:33:30.162019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:37:20.741 [2024-07-21 08:33:30.162022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.741 [2024-07-21 08:33:30.162051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.741 qpair failed and we were unable to recover it.
00:37:20.741 [2024-07-21 08:33:30.161979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:37:20.741 [2024-07-21 08:33:30.162203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.741 [2024-07-21 08:33:30.162228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.741 qpair failed and we were unable to recover it.
00:37:20.741 [2024-07-21 08:33:30.162329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.741 [2024-07-21 08:33:30.162355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.741 qpair failed and we were unable to recover it.
00:37:20.741 [2024-07-21 08:33:30.162460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.741 [2024-07-21 08:33:30.162488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.741 qpair failed and we were unable to recover it.
00:37:20.741 [2024-07-21 08:33:30.162595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-07-21 08:33:30.162628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.741 qpair failed and we were unable to recover it. 00:37:20.741 [2024-07-21 08:33:30.162764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-07-21 08:33:30.162796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.741 qpair failed and we were unable to recover it. 00:37:20.741 [2024-07-21 08:33:30.162937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-07-21 08:33:30.162971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.741 qpair failed and we were unable to recover it. 00:37:20.741 [2024-07-21 08:33:30.163084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-07-21 08:33:30.163111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.741 qpair failed and we were unable to recover it. 00:37:20.741 [2024-07-21 08:33:30.163208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-07-21 08:33:30.163236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.741 qpair failed and we were unable to recover it. 
00:37:20.741 [2024-07-21 08:33:30.163341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.741 [2024-07-21 08:33:30.163368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.741 qpair failed and we were unable to recover it. 
00:37:20.741-00:37:20.744 [2024-07-21 08:33:30.163524-08:33:30.180190] posix.c:1038 / nvme_tcp.c:2383: the same error pair (connect() failed, errno = 111; sock connection error; "qpair failed and we were unable to recover it.") repeats continuously for tqpairs 0x7fd7d4000b90, 0x7fd7e4000b90, 0x7fd7dc000b90, and 0x64d560, all targeting addr=10.0.0.2, port=4420. 
00:37:20.744 [2024-07-21 08:33:30.180290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.180318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.180449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.180476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.180579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.180625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.180736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.180762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.180872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.180908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 
00:37:20.744 [2024-07-21 08:33:30.181006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.181035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.181135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.181161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.181261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.181289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.181390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.181417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.181544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.181570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 
00:37:20.744 [2024-07-21 08:33:30.181717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.181744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.181838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.181864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.181999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.182025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.182125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.182151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.182263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.182304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 
00:37:20.744 [2024-07-21 08:33:30.182437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.182465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.182603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.182637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.182754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.182781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.182910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.182939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.183038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.183065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 
00:37:20.744 [2024-07-21 08:33:30.183174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.183205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.183324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.183353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.183478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.183523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.183653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.183682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.183816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.183842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 
00:37:20.744 [2024-07-21 08:33:30.183973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.184000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.184108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.184134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.184236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.184264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.184401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.184430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 00:37:20.744 [2024-07-21 08:33:30.184554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.744 [2024-07-21 08:33:30.184594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.744 qpair failed and we were unable to recover it. 
00:37:20.744 [2024-07-21 08:33:30.184724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.184751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.184856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.184882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.184992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.185018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.185120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.185146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.185270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.185298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 
00:37:20.745 [2024-07-21 08:33:30.185422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.185462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.185573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.185619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.185727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.185754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.185861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.185889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.186018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.186045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 
00:37:20.745 [2024-07-21 08:33:30.186156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.186183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.186288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.186317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.186419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.186446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.186563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.186592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.186728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.186756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 
00:37:20.745 [2024-07-21 08:33:30.186856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.186883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.186990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.187017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.187121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.187147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.187244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.187270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.187410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.187448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 
00:37:20.745 [2024-07-21 08:33:30.187558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.187586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.187711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.187752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.187865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.187893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.188030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.188057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.188162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.188189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 
00:37:20.745 [2024-07-21 08:33:30.188333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.188360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.188508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.188537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.188657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.188684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.188797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.188825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.188972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.188999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 
00:37:20.745 [2024-07-21 08:33:30.189150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.189177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.189290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.189317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.189426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.189454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.189581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.189626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.189757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.189785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 
00:37:20.745 [2024-07-21 08:33:30.189883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.189920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.190060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.190087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.190190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.190216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.190312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.190339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.190459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.190492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 
00:37:20.745 [2024-07-21 08:33:30.190641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.190682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.190784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.190812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.190953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.190980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.191112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.191139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 00:37:20.745 [2024-07-21 08:33:30.191260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.745 [2024-07-21 08:33:30.191286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.745 qpair failed and we were unable to recover it. 
00:37:20.745 [2024-07-21 08:33:30.191415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.745 [2024-07-21 08:33:30.191442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.745 qpair failed and we were unable to recover it.
00:37:20.745 [the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats continuously through 08:33:30.208702, cycling over tqpair values 0x7fd7dc000b90, 0x64d560, 0x7fd7d4000b90, and 0x7fd7e4000b90, all targeting addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:37:20.748 [2024-07-21 08:33:30.208840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.748 [2024-07-21 08:33:30.208874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.748 qpair failed and we were unable to recover it. 00:37:20.748 [2024-07-21 08:33:30.208975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.748 [2024-07-21 08:33:30.209002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.748 qpair failed and we were unable to recover it. 00:37:20.748 [2024-07-21 08:33:30.209125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.209152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.209261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.209288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.209387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.209413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 
00:37:20.749 [2024-07-21 08:33:30.209527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.209555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.209697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.209725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.209825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.209851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.209952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.209978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.210109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.210137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 
00:37:20.749 [2024-07-21 08:33:30.210280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.210308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.210422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.210449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.210569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.210595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.210716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.210745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.210900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.210940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 
00:37:20.749 [2024-07-21 08:33:30.211090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.211130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.211246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.211274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.211377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.211405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.211510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.211537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.211679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.211707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 
00:37:20.749 [2024-07-21 08:33:30.211815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.211841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.211973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.212000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.212099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.212125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.212228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.212256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.212354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.212384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 
00:37:20.749 [2024-07-21 08:33:30.212513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.212542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.212682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.212723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.212856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.212884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.212994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.213020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.213118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.213145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 
00:37:20.749 [2024-07-21 08:33:30.213275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.213302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.213408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.213434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.213562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.213589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.213721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.213760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.213865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.213893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 
00:37:20.749 [2024-07-21 08:33:30.214013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.214040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.214136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.214162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.214260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.214286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.214416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.214444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.214553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.214580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 
00:37:20.749 [2024-07-21 08:33:30.214696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.214731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.214843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.214869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.214970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.749 [2024-07-21 08:33:30.214996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.749 qpair failed and we were unable to recover it. 00:37:20.749 [2024-07-21 08:33:30.215148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.215175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.215312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.215340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 
00:37:20.750 [2024-07-21 08:33:30.215469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.215495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.215620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.215648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.215777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.215804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.215908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.215934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.216059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.216085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 
00:37:20.750 [2024-07-21 08:33:30.216225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.216253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.216383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.216412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.216536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.216576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.216693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.216722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.216839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.216866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 
00:37:20.750 [2024-07-21 08:33:30.216999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.217035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.217144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.217171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.217313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.217342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.217442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.217470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.217594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.217628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 
00:37:20.750 [2024-07-21 08:33:30.217728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.217755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.217877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.217910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.218015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.218041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.218170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.218197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.218300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.218327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 
00:37:20.750 [2024-07-21 08:33:30.218448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.218475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.218589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.218624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.218729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.218761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.218864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.218891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.219011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.219037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 
00:37:20.750 [2024-07-21 08:33:30.219168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.219202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.219313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.219340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.219484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.219512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.219660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.219688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 00:37:20.750 [2024-07-21 08:33:30.219812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.750 [2024-07-21 08:33:30.219839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.750 qpair failed and we were unable to recover it. 
00:37:20.750 [2024-07-21 08:33:30.219975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.750 [2024-07-21 08:33:30.220001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.750 qpair failed and we were unable to recover it.
00:37:20.750 [2024-07-21 08:33:30.220094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.750 [2024-07-21 08:33:30.220120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.750 qpair failed and we were unable to recover it.
00:37:20.750 [2024-07-21 08:33:30.220225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.750 [2024-07-21 08:33:30.220252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.750 qpair failed and we were unable to recover it.
00:37:20.750 [2024-07-21 08:33:30.220351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.750 [2024-07-21 08:33:30.220377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.750 qpair failed and we were unable to recover it.
00:37:20.750 [2024-07-21 08:33:30.220475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.750 [2024-07-21 08:33:30.220503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.750 qpair failed and we were unable to recover it.
00:37:20.750 [2024-07-21 08:33:30.220652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.750 [2024-07-21 08:33:30.220693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.750 qpair failed and we were unable to recover it.
00:37:20.750 [2024-07-21 08:33:30.220845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.750 [2024-07-21 08:33:30.220873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.750 qpair failed and we were unable to recover it.
00:37:20.750 [2024-07-21 08:33:30.220972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.750 [2024-07-21 08:33:30.220998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.750 qpair failed and we were unable to recover it.
00:37:20.750 [2024-07-21 08:33:30.221116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.750 [2024-07-21 08:33:30.221142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.750 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.221250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.221276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.221401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.221441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.221547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.221576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.221704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.221733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.221843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.221870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.221970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.221998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.222100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.222127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.222277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.222306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.222406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.222432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.222559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.222587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.222699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.222728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.222837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.222864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.222988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.223014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.223114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.223143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.223288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.223315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.223420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.223448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.223571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.223597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.223716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.223747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.223881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.223908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.224002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.224029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.224126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.224153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.224277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.224304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.224412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.224438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.224563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.224595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.224711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.224737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.224843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.224869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.224997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.225023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.225143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.225170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.225269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.225296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.225389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.225415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.225509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.225535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.225634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.225661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.225768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.751 [2024-07-21 08:33:30.225796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.751 qpair failed and we were unable to recover it.
00:37:20.751 [2024-07-21 08:33:30.225952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.225981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.226075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.226102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.226231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.226258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.226358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.226385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.226523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.226551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.226687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.226714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.226815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.226841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.226935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.226962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.227092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.227118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.227227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.227256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.227384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.227412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.227555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.227582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.227706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.227737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.227855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.227882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.227988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.228015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.228123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.228150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.228288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.228328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.228438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.228474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.228631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.228660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.228773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.228799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.228940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.228966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.229063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.229089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.229199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.229227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.229332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.229359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.229490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.229517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.229653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.229681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.229781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.229808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.229920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.229970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.230124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.230152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.230292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.230333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.230472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.230500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.230605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.230639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.230742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.230770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.230877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.230904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.231014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.231040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.231162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.231188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.231291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.231319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.231433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.231463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.231570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.231598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.231718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.752 [2024-07-21 08:33:30.231747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.752 qpair failed and we were unable to recover it.
00:37:20.752 [2024-07-21 08:33:30.231858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.231885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.231987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.232015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.232146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.232173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.232285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.232312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.232451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.232495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.232604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.232639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.232790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.232817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.232914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.232941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.233066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.233093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.233197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.233224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.233326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.233354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.233464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.233504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.233646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.233674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.233806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.233833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.233961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.233988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.234100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.234128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.234260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.234286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.234413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.234440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.234597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.234639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.234762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.234789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.234909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.234935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.235027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.235053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.235159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.235187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.235293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.235322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.235447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.235474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.235573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.235600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.235704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.235731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.235832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.235861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.236005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.236045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.236148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.236176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.236278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.236305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.236411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.236439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.236553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.753 [2024-07-21 08:33:30.236581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:20.753 qpair failed and we were unable to recover it.
00:37:20.753 [2024-07-21 08:33:30.236701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.753 [2024-07-21 08:33:30.236728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.753 qpair failed and we were unable to recover it. 00:37:20.753 [2024-07-21 08:33:30.236856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.753 [2024-07-21 08:33:30.236882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.753 qpair failed and we were unable to recover it. 00:37:20.753 [2024-07-21 08:33:30.236978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.753 [2024-07-21 08:33:30.237004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.753 qpair failed and we were unable to recover it. 00:37:20.753 [2024-07-21 08:33:30.237129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.753 [2024-07-21 08:33:30.237164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.753 qpair failed and we were unable to recover it. 00:37:20.753 [2024-07-21 08:33:30.237292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.753 [2024-07-21 08:33:30.237318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.753 qpair failed and we were unable to recover it. 
00:37:20.753 [2024-07-21 08:33:30.237437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.753 [2024-07-21 08:33:30.237483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.753 qpair failed and we were unable to recover it. 00:37:20.753 [2024-07-21 08:33:30.237658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.753 [2024-07-21 08:33:30.237686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.753 qpair failed and we were unable to recover it. 00:37:20.753 [2024-07-21 08:33:30.237795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.753 [2024-07-21 08:33:30.237823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.237938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.237965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.238096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.238123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 
00:37:20.754 [2024-07-21 08:33:30.238265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.238293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.238431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.238462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.238563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.238590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.238700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.238728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.238833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.238860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 
00:37:20.754 [2024-07-21 08:33:30.238957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.238983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.239084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.239111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.239236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.239262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.239394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.239421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.239526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.239555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 
00:37:20.754 [2024-07-21 08:33:30.239680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.239722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.239850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.239891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.240007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.240036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.240151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.240178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.240279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.240307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 
00:37:20.754 [2024-07-21 08:33:30.240418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.240446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.240622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.240662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.240774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.240804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.240939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.240966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.241099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.241126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 
00:37:20.754 [2024-07-21 08:33:30.241223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.241250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.241353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.241379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.241507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.241535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.241661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.241702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.241814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.241843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 
00:37:20.754 [2024-07-21 08:33:30.241978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.242004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.242135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.242161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.242277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.242303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.242409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.242442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.242573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.242599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 
00:37:20.754 [2024-07-21 08:33:30.242714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.242741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.242840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.242867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.242980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.243008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.243119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.243145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.754 qpair failed and we were unable to recover it. 00:37:20.754 [2024-07-21 08:33:30.243273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.754 [2024-07-21 08:33:30.243301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 
00:37:20.755 [2024-07-21 08:33:30.243445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.243485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.243611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.243657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.243798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.243826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.243928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.243955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.244086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.244112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 
00:37:20.755 [2024-07-21 08:33:30.244221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.244248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.244378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.244406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.244529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.244568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.244699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.244728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.244826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.244853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 
00:37:20.755 [2024-07-21 08:33:30.244949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.244975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.245124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.245152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.245250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.245277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.245368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.245395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.245502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.245529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 
00:37:20.755 [2024-07-21 08:33:30.245644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.245672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.245778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.245804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.245902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.245928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.246024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.246050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.246187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.246214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 
00:37:20.755 [2024-07-21 08:33:30.246321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.246363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.246484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.246523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.246657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.246686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.246788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.246815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.246923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.246952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 
00:37:20.755 [2024-07-21 08:33:30.247073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.247106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.247240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.247268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.247367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.247394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.247532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.247572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.247724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.247753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 
00:37:20.755 [2024-07-21 08:33:30.247861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.247888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.247995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.248023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.248153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.248192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.248400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.248432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 00:37:20.755 [2024-07-21 08:33:30.248535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.755 [2024-07-21 08:33:30.248564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.755 qpair failed and we were unable to recover it. 
00:37:20.755-00:37:20.758 [... the same three-line sequence (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for tqpairs 0x64d560, 0x7fd7d4000b90, 0x7fd7dc000b90, and 0x7fd7e4000b90 through 2024-07-21 08:33:30.264272 ...]
00:37:20.758 [2024-07-21 08:33:30.264414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.758 [2024-07-21 08:33:30.264442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.758 qpair failed and we were unable to recover it. 00:37:20.758 [2024-07-21 08:33:30.264569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.758 [2024-07-21 08:33:30.264597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.758 qpair failed and we were unable to recover it. 00:37:20.758 [2024-07-21 08:33:30.264710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.758 [2024-07-21 08:33:30.264738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.758 qpair failed and we were unable to recover it. 00:37:20.758 [2024-07-21 08:33:30.264864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.264891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.264990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.265027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 
00:37:20.759 [2024-07-21 08:33:30.265123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.265150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.265254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.265285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.265392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.265421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.265552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.265579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.265698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.265727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 
00:37:20.759 [2024-07-21 08:33:30.265852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.265879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.265990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.266017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.266113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.266140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.266249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.266276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.266371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.266398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 
00:37:20.759 [2024-07-21 08:33:30.266522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.266549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.266665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.266693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.266791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.266817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.266927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.266955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.267056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.267083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 
00:37:20.759 [2024-07-21 08:33:30.267198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.267224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.267337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.267363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.267462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.267501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.267610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.267659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.267770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.267798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 
00:37:20.759 [2024-07-21 08:33:30.267900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.267928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.268036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.268062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.268156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.268184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.268284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.268312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.268411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.268437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 
00:37:20.759 [2024-07-21 08:33:30.268528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.268554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.268677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.268705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.268809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.268836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.268930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.268957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.269055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.269082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 
00:37:20.759 [2024-07-21 08:33:30.269219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.269246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.269357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.269387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.269490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.269517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.269623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.269653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.269751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.269778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 
00:37:20.759 [2024-07-21 08:33:30.269990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.759 [2024-07-21 08:33:30.270018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.759 qpair failed and we were unable to recover it. 00:37:20.759 [2024-07-21 08:33:30.270149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.270180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.270277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.270306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.270398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.270425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.270535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.270562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 
00:37:20.760 [2024-07-21 08:33:30.270677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.270704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.270824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.270864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.270993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.271021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.271151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.271179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.271284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.271316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 
00:37:20.760 [2024-07-21 08:33:30.271452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.271480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.271579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.271623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.271766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.271793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.271908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.271936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.272040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.272067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 
00:37:20.760 [2024-07-21 08:33:30.272169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.272197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.272308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.272335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.272435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.272463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.272567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.272595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.272705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.272732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 
00:37:20.760 [2024-07-21 08:33:30.272836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.272863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.272990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.273017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.273136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.273176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.273288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.273316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.273417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.273445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 
00:37:20.760 [2024-07-21 08:33:30.273582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.273621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.273731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.273757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.273861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.273889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.274031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.274059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.274155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.274182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 
00:37:20.760 [2024-07-21 08:33:30.274284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.274312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.274415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.274444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.274539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.274566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.274681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.274709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 00:37:20.760 [2024-07-21 08:33:30.274814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.760 [2024-07-21 08:33:30.274840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.760 qpair failed and we were unable to recover it. 
00:37:20.760 [2024-07-21 08:33:30.274984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.275012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.275127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.275154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.275284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.275312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.275436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.275476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.275651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.275681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 
00:37:20.761 [2024-07-21 08:33:30.275788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.275815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.275909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.275945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.276061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.276088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.276223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.276250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.276373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.276401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 
00:37:20.761 [2024-07-21 08:33:30.276501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.276527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.276646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.276675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.276778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.276805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.276933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.276959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.277056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.277083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 
00:37:20.761 [2024-07-21 08:33:30.277183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.277210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.277335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.277361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.277486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.277511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.277623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.277649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.277748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.277776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 
00:37:20.761 [2024-07-21 08:33:30.277869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.277895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.278030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.278063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.278161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.278188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.278290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.278321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.278429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.278458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 
00:37:20.761 [2024-07-21 08:33:30.278554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.278580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.278700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.278727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.278831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.278858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.278976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.279003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.279223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.279250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 
00:37:20.761 [2024-07-21 08:33:30.279407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.279434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.279563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.279591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.279723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.279751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.279850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.279877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.279976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.280003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 
00:37:20.761 [2024-07-21 08:33:30.280109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.280137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.280268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.280302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.280407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.280435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.280558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.280588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.761 [2024-07-21 08:33:30.280753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.280781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 
00:37:20.761 [2024-07-21 08:33:30.280881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.761 [2024-07-21 08:33:30.280909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.761 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.281012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.281043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.281151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.281177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.281316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.281343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.281436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.281464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 
00:37:20.762 [2024-07-21 08:33:30.281575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.281602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.281715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.281743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.281845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.281872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.282022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.282052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.282149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.282175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 
00:37:20.762 [2024-07-21 08:33:30.282328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.282366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.282460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.282487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.282585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.282623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.282727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.282755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.282859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.282887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 
00:37:20.762 [2024-07-21 08:33:30.282996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.283024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.283164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.283190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.283330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.283360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.283492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.283519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.283612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.283645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 
00:37:20.762 [2024-07-21 08:33:30.283744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.283772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.283898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.283929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.284030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.284056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.284160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.284187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.284293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.284320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 
00:37:20.762 [2024-07-21 08:33:30.284421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.284448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.284546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.284574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.284706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.284747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.284851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.284880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.284986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.285013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 
00:37:20.762 [2024-07-21 08:33:30.285145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.285171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.285278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.285304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.285414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.285445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.285577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.285621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.285718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.285746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 
00:37:20.762 [2024-07-21 08:33:30.285861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.285887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.286002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.286029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 [2024-07-21 08:33:30.286138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.286164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:20.762 [2024-07-21 08:33:30.286258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.286285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.762 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:37:20.762 [2024-07-21 08:33:30.286397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.286423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 
00:37:20.762 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:20.762 [2024-07-21 08:33:30.286527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.762 [2024-07-21 08:33:30.286568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.762 qpair failed and we were unable to recover it. 00:37:20.763 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:20.763 [2024-07-21 08:33:30.286677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.286705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:20.763 [2024-07-21 08:33:30.286826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.286853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.286957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.286983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 
00:37:20.763 [2024-07-21 08:33:30.287084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.287110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.287254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.287283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.287386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.287414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.287524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.287549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.287665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.287692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 
00:37:20.763 [2024-07-21 08:33:30.287802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.287828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.287930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.287955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.288062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.288087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.288184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.288210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.288383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.288423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 
00:37:20.763 [2024-07-21 08:33:30.288523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.288551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.288668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.288696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.288797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.288825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.288920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.288946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.289089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.289117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 
00:37:20.763 [2024-07-21 08:33:30.289271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.289297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.289405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.289441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.289536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.289563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.289676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.289703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.289806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.289833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 
00:37:20.763 [2024-07-21 08:33:30.289928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.289954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.290076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.290105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.290204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.290236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.290340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.290367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.290463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.290492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 
00:37:20.763 [2024-07-21 08:33:30.290611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.290644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.290775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.290800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.290909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.290936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.291028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.291056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.291168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.291200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 
00:37:20.763 [2024-07-21 08:33:30.291312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.291338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.291437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.291462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.291565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.291591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.291698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.291725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.291822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.291847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 
00:37:20.763 [2024-07-21 08:33:30.291950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.291977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.292082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.763 [2024-07-21 08:33:30.292111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.763 qpair failed and we were unable to recover it. 00:37:20.763 [2024-07-21 08:33:30.292258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.292307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.292407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.292434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.292530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.292556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 
00:37:20.764 [2024-07-21 08:33:30.292668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.292695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.292803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.292828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.292956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.292983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.293088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.293115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.293211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.293237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 
00:37:20.764 [2024-07-21 08:33:30.293359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.293385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.293488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.293514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.293608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.293639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.293732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.293757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.293852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.293877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 
00:37:20.764 [2024-07-21 08:33:30.293982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.294008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.294106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.294131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.294220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.294246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.294345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.294370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.294467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.294494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 
00:37:20.764 [2024-07-21 08:33:30.294590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.294627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.294732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.294760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.294866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.294892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.294992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.295018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.295114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.295140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 
00:37:20.764 [2024-07-21 08:33:30.295243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.295269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.295398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.295425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.295557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.295583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.295723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.295753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.295856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.295882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 
00:37:20.764 [2024-07-21 08:33:30.295984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.296009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.296130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.296156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.296280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.296306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.296408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.296435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.296529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.296555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 
00:37:20.764 [2024-07-21 08:33:30.296650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.296676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.296774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.296800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.296892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.296917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.297022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.297047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.297144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.297170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 
00:37:20.764 [2024-07-21 08:33:30.297304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.297329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.297440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.297471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.297566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.764 [2024-07-21 08:33:30.297592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.764 qpair failed and we were unable to recover it. 00:37:20.764 [2024-07-21 08:33:30.297694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.765 [2024-07-21 08:33:30.297720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.765 qpair failed and we were unable to recover it. 00:37:20.765 [2024-07-21 08:33:30.297823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.765 [2024-07-21 08:33:30.297848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.765 qpair failed and we were unable to recover it. 
00:37:20.765 [2024-07-21 08:33:30.297941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.765 [2024-07-21 08:33:30.297967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.765 qpair failed and we were unable to recover it. 00:37:20.765 [2024-07-21 08:33:30.298085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.765 [2024-07-21 08:33:30.298111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:20.765 qpair failed and we were unable to recover it. 00:37:21.034 [2024-07-21 08:33:30.298245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-07-21 08:33:30.298272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-07-21 08:33:30.298368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-07-21 08:33:30.298394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-07-21 08:33:30.298505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-07-21 08:33:30.298532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 
00:37:21.034 [2024-07-21 08:33:30.298636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-07-21 08:33:30.298663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-07-21 08:33:30.298762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-07-21 08:33:30.298790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-07-21 08:33:30.298888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-07-21 08:33:30.298914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-07-21 08:33:30.299012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-07-21 08:33:30.299039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-07-21 08:33:30.299135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-07-21 08:33:30.299161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 
00:37:21.034 [2024-07-21 08:33:30.299266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-07-21 08:33:30.299293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-07-21 08:33:30.299390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-07-21 08:33:30.299415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-07-21 08:33:30.299508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.299533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.300310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.300341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.300456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.300483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-07-21 08:33:30.300588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.300623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.300729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.300755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.300859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.300884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.301003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.301029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.301158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.301184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-07-21 08:33:30.301306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.301333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.301429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.301456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.301557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.301583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.301699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.301727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.301823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.301850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-07-21 08:33:30.301958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.301984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.302083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.302109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.302208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.302236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.302340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.302367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.302497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.302524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-07-21 08:33:30.302664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.302690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.302883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.302910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.303048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.303090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.303232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.303264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-07-21 08:33:30.303391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-07-21 08:33:30.303428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-07-21 08:33:30.303526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.035 [2024-07-21 08:33:30.303553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.035 qpair failed and we were unable to recover it.
00:37:21.035 [2024-07-21 08:33:30.303657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.035 [2024-07-21 08:33:30.303690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.035 qpair failed and we were unable to recover it.
00:37:21.035 [2024-07-21 08:33:30.303798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.035 [2024-07-21 08:33:30.303830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:21.035 qpair failed and we were unable to recover it.
00:37:21.035 [2024-07-21 08:33:30.303957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.035 [2024-07-21 08:33:30.303984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:21.035 qpair failed and we were unable to recover it.
00:37:21.035 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:21.035 [2024-07-21 08:33:30.304090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.035 [2024-07-21 08:33:30.304117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:21.035 qpair failed and we were unable to recover it.
00:37:21.035 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:37:21.035 [2024-07-21 08:33:30.304213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.035 [2024-07-21 08:33:30.304240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:21.035 qpair failed and we were unable to recover it.
00:37:21.035 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:21.035 [2024-07-21 08:33:30.304359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.035 [2024-07-21 08:33:30.304385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:21.035 qpair failed and we were unable to recover it.
00:37:21.035 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:21.035 [2024-07-21 08:33:30.304491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.035 [2024-07-21 08:33:30.304519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:21.035 qpair failed and we were unable to recover it.
00:37:21.035 [2024-07-21 08:33:30.304660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.304686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.304785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.304812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.304946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.304973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.305075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.305100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.305196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.305224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.305391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.305430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.305537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.305565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.305676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.305704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.305832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.305859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.305966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.305992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.306104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.306130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.306265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.306291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.306441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.306471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.306604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.306636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.306731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.306757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.306854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.306881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.307023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.307049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.307154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.307180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.307325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.307360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.307494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.307520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.307629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.307657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.307780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.307806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.307907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.307942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.308080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.308106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.308232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.308259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.308391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.308418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.308514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.308540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.308677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.308704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.308834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.308860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.308994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.309020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.309117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.309142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.309310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.309337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.309446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.309473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.309581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.309607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.309721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.309747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.309855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.309880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.310007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.310033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.310137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.310164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.310293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.310318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.310434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.310460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.036 [2024-07-21 08:33:30.310566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.036 [2024-07-21 08:33:30.310592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.036 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.310699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.310725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.310825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.310854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.310968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.310995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.311117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.311143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.311274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.311309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.311442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.311468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.311567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.311593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.311696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.311722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.311816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.311841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.311988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.312014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.312150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.312176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.312272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.312298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.312409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.312436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.312528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.312554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.312684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.312711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.312821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.312846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.312967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.312992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.313088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.313118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.313230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.313257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.313414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.313439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.313543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.313569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.313675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.313702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.313795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.313821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.313938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.313969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.314091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.314117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.314214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.314241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.314357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.314383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.314509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.314534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.314640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.314666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.037 qpair failed and we were unable to recover it.
00:37:21.037 [2024-07-21 08:33:30.314770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.037 [2024-07-21 08:33:30.314796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.314902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.314934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.315032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.315059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.315158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.315184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.315277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.315303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.315402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.315428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.315556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.315582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.315693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.315720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.315816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.315841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.315949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.315975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.316079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.316106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.316264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.316290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.316402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.316428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.316524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.316557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.316670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.316697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7e4000b90 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.316812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.038 [2024-07-21 08:33:30.316845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420
00:37:21.038 qpair failed and we were unable to recover it.
00:37:21.038 [2024-07-21 08:33:30.316974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.317000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.317111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.317137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.317260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.317286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.317430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.317471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.317593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.317636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 
00:37:21.038 [2024-07-21 08:33:30.317773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.317798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.317910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.317942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.318045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.318071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.318177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.318204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.318306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.318331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 
00:37:21.038 [2024-07-21 08:33:30.318459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.318486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.318623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.318650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.318783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.318808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.318912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.318940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.319061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.319087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 
00:37:21.038 [2024-07-21 08:33:30.319193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.319219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.319363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.319403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.319541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.319570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.319707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.319734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.319832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.319858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 
00:37:21.038 [2024-07-21 08:33:30.319968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.319996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.320104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.320129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.320244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.320272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.320401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.320427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 00:37:21.038 [2024-07-21 08:33:30.320527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.038 [2024-07-21 08:33:30.320553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.038 qpair failed and we were unable to recover it. 
00:37:21.038 [2024-07-21 08:33:30.320671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.320698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.320812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.320837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.320943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.320969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.321107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.321132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.321230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.321256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 
00:37:21.039 [2024-07-21 08:33:30.321389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.321418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.321566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.321592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.321732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.321759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.321893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.321928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.322060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.322087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 
00:37:21.039 [2024-07-21 08:33:30.322196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.322222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.322353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.322379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.322518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.322544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.322744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.322771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.322901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.322939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 
00:37:21.039 [2024-07-21 08:33:30.323041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.323073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.323195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.323221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.323373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.323400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.323499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.323525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.323659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.323686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 
00:37:21.039 [2024-07-21 08:33:30.323821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.323848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.323982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.324008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.324111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.324137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.324269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.324297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.324397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.324424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 
00:37:21.039 [2024-07-21 08:33:30.324555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.324581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.324699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.324726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.324832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.324859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.324978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.325005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.325138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.325165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 
00:37:21.039 [2024-07-21 08:33:30.325263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.325289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.325427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.325453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-07-21 08:33:30.325550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-07-21 08:33:30.325577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.325683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.325709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.325812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.325838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-07-21 08:33:30.325946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.325973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.326075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.326110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.326223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.326249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.326375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.326401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.326499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.326525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-07-21 08:33:30.326638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.326664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.326773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.326801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.326919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.326958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.327070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.327097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.327202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.327229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-07-21 08:33:30.327330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.327356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 Malloc0 00:37:21.040 [2024-07-21 08:33:30.327458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.327485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64d560 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.327591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.327628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.327732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.327758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.040 [2024-07-21 08:33:30.327870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.327896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:21.040 [2024-07-21 08:33:30.328019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.328045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.040 [2024-07-21 08:33:30.328190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:21.040 [2024-07-21 08:33:30.328217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.328318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.328346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.328451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.328478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-07-21 08:33:30.328624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.328652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.328757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.328783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.328890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.328917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.329021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.329048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-07-21 08:33:30.329143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-07-21 08:33:30.329176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-07-21 08:33:30.329304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.040 [2024-07-21 08:33:30.329330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.040 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure record repeated 13 more times for tqpair=0x7fd7dc000b90 (08:33:30.329417 - 08:33:30.331107) ...]
00:37:21.041 [2024-07-21 08:33:30.331142] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... identical connect()/qpair failure record repeated 6 times for tqpair=0x7fd7dc000b90 (08:33:30.331260 - 08:33:30.332005) ...]
00:37:21.041 [2024-07-21 08:33:30.332151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.041 [2024-07-21 08:33:30.332181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.041 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure record repeated 22 more times, alternating between tqpair=0x7fd7dc000b90 and tqpair=0x64d560 (08:33:30.332287 - 08:33:30.335190) ...]
[... identical connect()/qpair failure record repeated 2 times for tqpair=0x7fd7d4000b90 (08:33:30.335309 - 08:33:30.335502) ...]
00:37:21.041 [2024-07-21 08:33:30.335669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.041 [2024-07-21 08:33:30.335698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.041 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure record repeated 24 more times, alternating between tqpair=0x7fd7d4000b90 and tqpair=0x7fd7e4000b90 (08:33:30.335796 - 08:33:30.339087) ...]
00:37:21.042 [2024-07-21 08:33:30.339217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.042 [2024-07-21 08:33:30.339244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.042 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure record repeated once more for tqpair=0x7fd7dc000b90 (08:33:30.339347) ...]
00:37:21.042 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
[... identical connect()/qpair failure record for tqpair=0x7fd7dc000b90 (08:33:30.339487) ...]
00:37:21.042 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
[... identical connect()/qpair failure record for tqpair=0x7fd7dc000b90 (08:33:30.339603) ...]
00:37:21.042 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[... identical connect()/qpair failure record for tqpair=0x7fd7dc000b90 (08:33:30.339740) ...]
00:37:21.042 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical connect()/qpair failure record for tqpair=0x7fd7dc000b90 (08:33:30.339875) ...]
[... identical connect()/qpair failure record repeated 3 times for tqpair=0x7fd7d4000b90 (08:33:30.340029 - 08:33:30.340327) ...]
00:37:21.042 [2024-07-21 08:33:30.340458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.042 [2024-07-21 08:33:30.340483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.042 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure record repeated 34 more times for tqpair=0x7fd7d4000b90 (08:33:30.340605 - 08:33:30.345313) ...]
00:37:21.043 [2024-07-21 08:33:30.345440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.043 [2024-07-21 08:33:30.345467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.043 qpair failed and we were unable to recover it. 00:37:21.043 [2024-07-21 08:33:30.345619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.043 [2024-07-21 08:33:30.345645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.043 qpair failed and we were unable to recover it. 00:37:21.043 [2024-07-21 08:33:30.345754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.043 [2024-07-21 08:33:30.345781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.043 qpair failed and we were unable to recover it. 00:37:21.043 [2024-07-21 08:33:30.345891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.043 [2024-07-21 08:33:30.345918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.043 qpair failed and we were unable to recover it. 00:37:21.043 [2024-07-21 08:33:30.346009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.043 [2024-07-21 08:33:30.346036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.043 qpair failed and we were unable to recover it. 
00:37:21.043 [2024-07-21 08:33:30.346153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.043 [2024-07-21 08:33:30.346180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.043 qpair failed and we were unable to recover it. 00:37:21.043 [2024-07-21 08:33:30.346309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.043 [2024-07-21 08:33:30.346335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.043 qpair failed and we were unable to recover it. 00:37:21.043 [2024-07-21 08:33:30.346434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.043 [2024-07-21 08:33:30.346460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.043 qpair failed and we were unable to recover it. 00:37:21.043 [2024-07-21 08:33:30.346550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.043 [2024-07-21 08:33:30.346576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.043 qpair failed and we were unable to recover it. 00:37:21.043 [2024-07-21 08:33:30.346719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.043 [2024-07-21 08:33:30.346746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.043 qpair failed and we were unable to recover it. 
00:37:21.044 [2024-07-21 08:33:30.346851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.044 [2024-07-21 08:33:30.346876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.044 qpair failed and we were unable to recover it.
00:37:21.044 [2024-07-21 08:33:30.346976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.044 [2024-07-21 08:33:30.347008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.044 qpair failed and we were unable to recover it.
00:37:21.044 [2024-07-21 08:33:30.347118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.044 [2024-07-21 08:33:30.347144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.044 qpair failed and we were unable to recover it.
00:37:21.044 [2024-07-21 08:33:30.347256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.044 [2024-07-21 08:33:30.347291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.044 qpair failed and we were unable to recover it.
00:37:21.044 [2024-07-21 08:33:30.347391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.044 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:21.044 [2024-07-21 08:33:30.347417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.044 qpair failed and we were unable to recover it.
00:37:21.044 [2024-07-21 08:33:30.347516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.044 [2024-07-21 08:33:30.347542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.044 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:37:21.044 qpair failed and we were unable to recover it.
00:37:21.044 [2024-07-21 08:33:30.347667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.044 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:21.044 [2024-07-21 08:33:30.347694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.044 qpair failed and we were unable to recover it.
00:37:21.044 [2024-07-21 08:33:30.347802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.044 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:21.044 [2024-07-21 08:33:30.347829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.044 qpair failed and we were unable to recover it.
00:37:21.044 [2024-07-21 08:33:30.347973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.044 [2024-07-21 08:33:30.347999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420
00:37:21.044 qpair failed and we were unable to recover it.
00:37:21.044 [2024-07-21 08:33:30.348102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.348129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.348223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.348249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.348352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.348378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.348479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.348505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.348641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.348667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 
00:37:21.044 [2024-07-21 08:33:30.348767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.348793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.348893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.348919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.349028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.349054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.349158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.349184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.349306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.349331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 
00:37:21.044 [2024-07-21 08:33:30.349472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.349498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.349625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.349652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.349751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.349777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.349883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.349909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.350013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.350041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 
00:37:21.044 [2024-07-21 08:33:30.350134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.350161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.350255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.350282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.350382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.350411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.350547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.350587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.350738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.350766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 
00:37:21.044 [2024-07-21 08:33:30.350871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.350898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.351036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.351062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.351183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.351209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.351328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.351353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.351464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.351490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 
00:37:21.044 [2024-07-21 08:33:30.351633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-07-21 08:33:30.351659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-07-21 08:33:30.351767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.351793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.351914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.351940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.352050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.352077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.352181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.352207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-07-21 08:33:30.352313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.352339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.352468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.352494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.352626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.352652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.352750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.352777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.352874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.352901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-07-21 08:33:30.353036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.353062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.353160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.353186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.353279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.353306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.353431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.353457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.353549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.353576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-07-21 08:33:30.353684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.353710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.353807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.353833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.353954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.353980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.354076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.354103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.354265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.354292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-07-21 08:33:30.354421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.354450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.354556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.354582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.354689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.354715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.354817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.354843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.354978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.355005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-07-21 08:33:30.355108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.045 [2024-07-21 08:33:30.355135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.045 qpair failed and we were unable to recover it.
00:37:21.045 [2024-07-21 08:33:30.355230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.045 [2024-07-21 08:33:30.355256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.045 qpair failed and we were unable to recover it.
00:37:21.045 [2024-07-21 08:33:30.355357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.045 [2024-07-21 08:33:30.355383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.045 qpair failed and we were unable to recover it.
00:37:21.045 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:21.045 [2024-07-21 08:33:30.355491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.045 [2024-07-21 08:33:30.355517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.045 qpair failed and we were unable to recover it.
00:37:21.045 [2024-07-21 08:33:30.355625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.045 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:21.045 [2024-07-21 08:33:30.355651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.045 qpair failed and we were unable to recover it.
00:37:21.045 [2024-07-21 08:33:30.355754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.045 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:21.045 [2024-07-21 08:33:30.355781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.045 qpair failed and we were unable to recover it.
00:37:21.045 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:21.045 [2024-07-21 08:33:30.355888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.045 [2024-07-21 08:33:30.355915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.045 qpair failed and we were unable to recover it.
00:37:21.045 [2024-07-21 08:33:30.356056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.045 [2024-07-21 08:33:30.356083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420
00:37:21.045 qpair failed and we were unable to recover it.
00:37:21.045 [2024-07-21 08:33:30.356231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.356257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.356362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.356388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.356494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.356520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.356664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.356690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.356791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.356817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-07-21 08:33:30.356936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.356962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-07-21 08:33:30.357058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-07-21 08:33:30.357084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.357193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.357219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.357346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.357371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.357465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.357491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 
00:37:21.046 [2024-07-21 08:33:30.357584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.357610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.357747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.357776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.357898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.357925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.358057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.358083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.358181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.358208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 
00:37:21.046 [2024-07-21 08:33:30.358304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.358330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.358434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.358460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7d4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.358594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.358634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.358763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.358789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.358897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.358923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 
00:37:21.046 [2024-07-21 08:33:30.359034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.359061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.359193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-07-21 08:33:30.359219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7dc000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-07-21 08:33:30.359578] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:21.046 [2024-07-21 08:33:30.361876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.046 [2024-07-21 08:33:30.362014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.046 [2024-07-21 08:33:30.362042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.046 [2024-07-21 08:33:30.362067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.046 [2024-07-21 08:33:30.362085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.046 [2024-07-21 08:33:30.362132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.046 qpair failed and we were unable to recover it. 
00:37:21.046 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:21.046 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:37:21.046 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:21.046 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:21.046 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:21.046 08:33:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 84608
00:37:21.046 [2024-07-21 08:33:30.371715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.046 [2024-07-21 08:33:30.371861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.046 [2024-07-21 08:33:30.371888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.046 [2024-07-21 08:33:30.371911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.046 [2024-07-21 08:33:30.371925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.046 [2024-07-21 08:33:30.371954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.046 qpair failed and we were unable to recover it.
00:37:21.046 [2024-07-21 08:33:30.381752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.046 [2024-07-21 08:33:30.381854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.046 [2024-07-21 08:33:30.381882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.046 [2024-07-21 08:33:30.381897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.046 [2024-07-21 08:33:30.381910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.046 [2024-07-21 08:33:30.381939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.046 qpair failed and we were unable to recover it.
00:37:21.046 [2024-07-21 08:33:30.391770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.046 [2024-07-21 08:33:30.391877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.046 [2024-07-21 08:33:30.391904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.046 [2024-07-21 08:33:30.391919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.046 [2024-07-21 08:33:30.391932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.046 [2024-07-21 08:33:30.391961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.046 qpair failed and we were unable to recover it.
00:37:21.046 [2024-07-21 08:33:30.401763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.046 [2024-07-21 08:33:30.401870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.046 [2024-07-21 08:33:30.401901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.046 [2024-07-21 08:33:30.401917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.046 [2024-07-21 08:33:30.401930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.046 [2024-07-21 08:33:30.401959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.046 qpair failed and we were unable to recover it.
00:37:21.046 [2024-07-21 08:33:30.411772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.046 [2024-07-21 08:33:30.411869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.046 [2024-07-21 08:33:30.411902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.046 [2024-07-21 08:33:30.411917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.046 [2024-07-21 08:33:30.411930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.046 [2024-07-21 08:33:30.411960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.046 qpair failed and we were unable to recover it.
00:37:21.046 [2024-07-21 08:33:30.421800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.046 [2024-07-21 08:33:30.421903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.046 [2024-07-21 08:33:30.421930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.046 [2024-07-21 08:33:30.421945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.046 [2024-07-21 08:33:30.421958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.046 [2024-07-21 08:33:30.421988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.046 qpair failed and we were unable to recover it.
00:37:21.046 [2024-07-21 08:33:30.431816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.046 [2024-07-21 08:33:30.431927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.046 [2024-07-21 08:33:30.431953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.046 [2024-07-21 08:33:30.431967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.046 [2024-07-21 08:33:30.431980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.046 [2024-07-21 08:33:30.432010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.046 qpair failed and we were unable to recover it.
00:37:21.046 [2024-07-21 08:33:30.441827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.441933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.441963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.441978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.441996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.442027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.451849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.451949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.451975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.451989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.452002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.452031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.461857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.461980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.462007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.462021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.462034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.462063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.471886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.471993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.472019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.472034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.472047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.472076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.481940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.482042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.482069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.482083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.482096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.482126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.491991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.492101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.492127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.492142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.492155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.492184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.502083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.502182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.502208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.502223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.502236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.502267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.511998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.512098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.512123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.512138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.512151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.512181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.522113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.522259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.522285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.522299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.522312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.522341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.532071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.532168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.532193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.532207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.532226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.532256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.542122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.542219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.542246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.542260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.542273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.542315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.552120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.552226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.552253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.552267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.552280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.552311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.562192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.562291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.562317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.562331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.562344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.562374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.572212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.572317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.572343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.572357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.047 [2024-07-21 08:33:30.572370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.047 [2024-07-21 08:33:30.572401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.047 qpair failed and we were unable to recover it.
00:37:21.047 [2024-07-21 08:33:30.582270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.047 [2024-07-21 08:33:30.582374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.047 [2024-07-21 08:33:30.582401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.047 [2024-07-21 08:33:30.582416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.048 [2024-07-21 08:33:30.582429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.048 [2024-07-21 08:33:30.582458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.048 qpair failed and we were unable to recover it.
00:37:21.048 [2024-07-21 08:33:30.592295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.048 [2024-07-21 08:33:30.592450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.048 [2024-07-21 08:33:30.592476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.048 [2024-07-21 08:33:30.592490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.048 [2024-07-21 08:33:30.592503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.048 [2024-07-21 08:33:30.592532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.048 qpair failed and we were unable to recover it.
00:37:21.048 [2024-07-21 08:33:30.602325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.048 [2024-07-21 08:33:30.602433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.048 [2024-07-21 08:33:30.602459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.048 [2024-07-21 08:33:30.602473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.048 [2024-07-21 08:33:30.602487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.048 [2024-07-21 08:33:30.602516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.048 qpair failed and we were unable to recover it.
00:37:21.048 [2024-07-21 08:33:30.612330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.048 [2024-07-21 08:33:30.612449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.048 [2024-07-21 08:33:30.612475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.048 [2024-07-21 08:33:30.612490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.048 [2024-07-21 08:33:30.612503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.048 [2024-07-21 08:33:30.612532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.048 qpair failed and we were unable to recover it.
00:37:21.048 [2024-07-21 08:33:30.622331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.048 [2024-07-21 08:33:30.622431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.048 [2024-07-21 08:33:30.622457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.048 [2024-07-21 08:33:30.622478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.048 [2024-07-21 08:33:30.622492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:21.048 [2024-07-21 08:33:30.622523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:21.048 qpair failed and we were unable to recover it.
00:37:21.048 [2024-07-21 08:33:30.632357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.048 [2024-07-21 08:33:30.632486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.048 [2024-07-21 08:33:30.632512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.048 [2024-07-21 08:33:30.632527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.048 [2024-07-21 08:33:30.632540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.048 [2024-07-21 08:33:30.632569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.048 qpair failed and we were unable to recover it. 
00:37:21.048 [2024-07-21 08:33:30.642426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.048 [2024-07-21 08:33:30.642520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.048 [2024-07-21 08:33:30.642546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.048 [2024-07-21 08:33:30.642560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.048 [2024-07-21 08:33:30.642572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.048 [2024-07-21 08:33:30.642602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.048 qpair failed and we were unable to recover it. 
00:37:21.048 [2024-07-21 08:33:30.652431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.048 [2024-07-21 08:33:30.652533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.048 [2024-07-21 08:33:30.652560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.048 [2024-07-21 08:33:30.652574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.048 [2024-07-21 08:33:30.652587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.048 [2024-07-21 08:33:30.652624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.048 qpair failed and we were unable to recover it. 
00:37:21.309 [2024-07-21 08:33:30.662477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.309 [2024-07-21 08:33:30.662575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.309 [2024-07-21 08:33:30.662602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.309 [2024-07-21 08:33:30.662624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.309 [2024-07-21 08:33:30.662642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.309 [2024-07-21 08:33:30.662672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.309 qpair failed and we were unable to recover it. 
00:37:21.309 [2024-07-21 08:33:30.672565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.309 [2024-07-21 08:33:30.672678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.309 [2024-07-21 08:33:30.672705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.309 [2024-07-21 08:33:30.672720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.309 [2024-07-21 08:33:30.672733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.309 [2024-07-21 08:33:30.672775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.309 qpair failed and we were unable to recover it. 
00:37:21.309 [2024-07-21 08:33:30.682503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.682607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.682641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.682656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.682669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.682699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.692529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.692662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.692688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.692702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.692717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.692747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.702558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.702661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.702687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.702702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.702715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.702744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.712631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.712733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.712763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.712779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.712792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.712821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.722682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.722782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.722807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.722821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.722834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.722864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.732629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.732750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.732778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.732792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.732806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.732836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.742649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.742753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.742779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.742793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.742806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.742837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.752677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.752781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.752807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.752821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.752834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.752871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.762736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.762848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.762874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.762889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.762902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.762933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.772785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.772888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.772914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.772928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.772941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.772971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.782793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.782884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.782909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.782923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.782938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.782968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.792832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.792948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.792974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.792989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.793002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.793032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.802832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.802957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.802987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.803003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.803016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.803045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.812862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.310 [2024-07-21 08:33:30.812981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.310 [2024-07-21 08:33:30.813007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.310 [2024-07-21 08:33:30.813022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.310 [2024-07-21 08:33:30.813035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.310 [2024-07-21 08:33:30.813077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.310 qpair failed and we were unable to recover it. 
00:37:21.310 [2024-07-21 08:33:30.822898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.311 [2024-07-21 08:33:30.822996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.311 [2024-07-21 08:33:30.823022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.311 [2024-07-21 08:33:30.823036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.311 [2024-07-21 08:33:30.823050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.311 [2024-07-21 08:33:30.823079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.311 qpair failed and we were unable to recover it. 
00:37:21.311 [2024-07-21 08:33:30.832904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.311 [2024-07-21 08:33:30.833020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.311 [2024-07-21 08:33:30.833046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.311 [2024-07-21 08:33:30.833060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.311 [2024-07-21 08:33:30.833073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.311 [2024-07-21 08:33:30.833103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.311 qpair failed and we were unable to recover it. 
00:37:21.311 [2024-07-21 08:33:30.842975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.311 [2024-07-21 08:33:30.843074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.311 [2024-07-21 08:33:30.843100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.311 [2024-07-21 08:33:30.843114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.311 [2024-07-21 08:33:30.843127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.311 [2024-07-21 08:33:30.843162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.311 qpair failed and we were unable to recover it. 
00:37:21.311 [2024-07-21 08:33:30.852963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.311 [2024-07-21 08:33:30.853064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.311 [2024-07-21 08:33:30.853090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.311 [2024-07-21 08:33:30.853105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.311 [2024-07-21 08:33:30.853118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.311 [2024-07-21 08:33:30.853147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.311 qpair failed and we were unable to recover it. 
00:37:21.311 [2024-07-21 08:33:30.863004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.311 [2024-07-21 08:33:30.863104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.311 [2024-07-21 08:33:30.863130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.311 [2024-07-21 08:33:30.863144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.311 [2024-07-21 08:33:30.863157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.311 [2024-07-21 08:33:30.863199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.311 qpair failed and we were unable to recover it. 
00:37:21.311 [2024-07-21 08:33:30.873023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.311 [2024-07-21 08:33:30.873130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.311 [2024-07-21 08:33:30.873156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.311 [2024-07-21 08:33:30.873170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.311 [2024-07-21 08:33:30.873183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.311 [2024-07-21 08:33:30.873212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.311 qpair failed and we were unable to recover it. 
00:37:21.311 [2024-07-21 08:33:30.883078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.311 [2024-07-21 08:33:30.883181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.311 [2024-07-21 08:33:30.883208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.311 [2024-07-21 08:33:30.883222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.311 [2024-07-21 08:33:30.883235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.311 [2024-07-21 08:33:30.883264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.311 qpair failed and we were unable to recover it. 
00:37:21.311 [2024-07-21 08:33:30.893104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.311 [2024-07-21 08:33:30.893217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.311 [2024-07-21 08:33:30.893243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.311 [2024-07-21 08:33:30.893257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.311 [2024-07-21 08:33:30.893271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.311 [2024-07-21 08:33:30.893301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.311 qpair failed and we were unable to recover it. 
00:37:21.311 [2024-07-21 08:33:30.903136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.311 [2024-07-21 08:33:30.903250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.311 [2024-07-21 08:33:30.903275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.311 [2024-07-21 08:33:30.903289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.311 [2024-07-21 08:33:30.903302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.311 [2024-07-21 08:33:30.903331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.311 qpair failed and we were unable to recover it. 
00:37:21.311 [2024-07-21 08:33:30.913145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.311 [2024-07-21 08:33:30.913251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.311 [2024-07-21 08:33:30.913276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.311 [2024-07-21 08:33:30.913290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.311 [2024-07-21 08:33:30.913303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.311 [2024-07-21 08:33:30.913333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.311 qpair failed and we were unable to recover it. 
00:37:21.833 [2024-07-21 08:33:31.274157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.833 [2024-07-21 08:33:31.274265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.833 [2024-07-21 08:33:31.274290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.833 [2024-07-21 08:33:31.274305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.833 [2024-07-21 08:33:31.274318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.833 [2024-07-21 08:33:31.274348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.833 qpair failed and we were unable to recover it. 
00:37:21.833 [2024-07-21 08:33:31.284243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.833 [2024-07-21 08:33:31.284342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.833 [2024-07-21 08:33:31.284368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.833 [2024-07-21 08:33:31.284382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.833 [2024-07-21 08:33:31.284396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.833 [2024-07-21 08:33:31.284425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.833 qpair failed and we were unable to recover it. 
00:37:21.833 [2024-07-21 08:33:31.294199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.833 [2024-07-21 08:33:31.294305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.833 [2024-07-21 08:33:31.294336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.833 [2024-07-21 08:33:31.294352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.833 [2024-07-21 08:33:31.294365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.833 [2024-07-21 08:33:31.294394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.833 qpair failed and we were unable to recover it. 
00:37:21.833 [2024-07-21 08:33:31.304271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.833 [2024-07-21 08:33:31.304372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.833 [2024-07-21 08:33:31.304398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.833 [2024-07-21 08:33:31.304412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.833 [2024-07-21 08:33:31.304426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.833 [2024-07-21 08:33:31.304455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.833 qpair failed and we were unable to recover it. 
00:37:21.833 [2024-07-21 08:33:31.314275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.833 [2024-07-21 08:33:31.314376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.833 [2024-07-21 08:33:31.314403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.833 [2024-07-21 08:33:31.314417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.833 [2024-07-21 08:33:31.314430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.833 [2024-07-21 08:33:31.314459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.833 qpair failed and we were unable to recover it. 
00:37:21.833 [2024-07-21 08:33:31.324319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.833 [2024-07-21 08:33:31.324418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.833 [2024-07-21 08:33:31.324443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.833 [2024-07-21 08:33:31.324458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.833 [2024-07-21 08:33:31.324472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.833 [2024-07-21 08:33:31.324500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.833 qpair failed and we were unable to recover it. 
00:37:21.833 [2024-07-21 08:33:31.334323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.833 [2024-07-21 08:33:31.334429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.833 [2024-07-21 08:33:31.334457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.833 [2024-07-21 08:33:31.334472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.833 [2024-07-21 08:33:31.334490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.833 [2024-07-21 08:33:31.334522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.833 qpair failed and we were unable to recover it. 
00:37:21.833 [2024-07-21 08:33:31.344379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.833 [2024-07-21 08:33:31.344479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.833 [2024-07-21 08:33:31.344519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.833 [2024-07-21 08:33:31.344534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.833 [2024-07-21 08:33:31.344548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.833 [2024-07-21 08:33:31.344590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.833 qpair failed and we were unable to recover it. 
00:37:21.833 [2024-07-21 08:33:31.354385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.833 [2024-07-21 08:33:31.354527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.833 [2024-07-21 08:33:31.354553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.833 [2024-07-21 08:33:31.354568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.833 [2024-07-21 08:33:31.354581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.833 [2024-07-21 08:33:31.354619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.833 qpair failed and we were unable to recover it. 
00:37:21.834 [2024-07-21 08:33:31.364432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.834 [2024-07-21 08:33:31.364529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.834 [2024-07-21 08:33:31.364555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.834 [2024-07-21 08:33:31.364569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.834 [2024-07-21 08:33:31.364582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.834 [2024-07-21 08:33:31.364631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.834 qpair failed and we were unable to recover it. 
00:37:21.834 [2024-07-21 08:33:31.374439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.834 [2024-07-21 08:33:31.374536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.834 [2024-07-21 08:33:31.374562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.834 [2024-07-21 08:33:31.374576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.834 [2024-07-21 08:33:31.374590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.834 [2024-07-21 08:33:31.374626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.834 qpair failed and we were unable to recover it. 
00:37:21.834 [2024-07-21 08:33:31.384475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.834 [2024-07-21 08:33:31.384587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.834 [2024-07-21 08:33:31.384628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.834 [2024-07-21 08:33:31.384647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.834 [2024-07-21 08:33:31.384661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.834 [2024-07-21 08:33:31.384704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.834 qpair failed and we were unable to recover it. 
00:37:21.834 [2024-07-21 08:33:31.394484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.834 [2024-07-21 08:33:31.394588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.834 [2024-07-21 08:33:31.394622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.834 [2024-07-21 08:33:31.394639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.834 [2024-07-21 08:33:31.394653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.834 [2024-07-21 08:33:31.394683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.834 qpair failed and we were unable to recover it. 
00:37:21.834 [2024-07-21 08:33:31.404545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.834 [2024-07-21 08:33:31.404648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.834 [2024-07-21 08:33:31.404674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.834 [2024-07-21 08:33:31.404688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.834 [2024-07-21 08:33:31.404701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.834 [2024-07-21 08:33:31.404732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.834 qpair failed and we were unable to recover it. 
00:37:21.834 [2024-07-21 08:33:31.414542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.834 [2024-07-21 08:33:31.414649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.834 [2024-07-21 08:33:31.414675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.834 [2024-07-21 08:33:31.414689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.834 [2024-07-21 08:33:31.414704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.834 [2024-07-21 08:33:31.414733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.834 qpair failed and we were unable to recover it. 
00:37:21.834 [2024-07-21 08:33:31.424577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.834 [2024-07-21 08:33:31.424684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.834 [2024-07-21 08:33:31.424713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.834 [2024-07-21 08:33:31.424734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.834 [2024-07-21 08:33:31.424749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.834 [2024-07-21 08:33:31.424780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.834 qpair failed and we were unable to recover it. 
00:37:21.834 [2024-07-21 08:33:31.434623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.834 [2024-07-21 08:33:31.434727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.834 [2024-07-21 08:33:31.434753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.834 [2024-07-21 08:33:31.434768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.834 [2024-07-21 08:33:31.434782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.834 [2024-07-21 08:33:31.434811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.834 qpair failed and we were unable to recover it. 
00:37:21.834 [2024-07-21 08:33:31.444666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.834 [2024-07-21 08:33:31.444767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.834 [2024-07-21 08:33:31.444793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.834 [2024-07-21 08:33:31.444807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.834 [2024-07-21 08:33:31.444820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.834 [2024-07-21 08:33:31.444852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.834 qpair failed and we were unable to recover it. 
00:37:21.834 [2024-07-21 08:33:31.454685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.834 [2024-07-21 08:33:31.454780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.834 [2024-07-21 08:33:31.454806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.834 [2024-07-21 08:33:31.454821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.834 [2024-07-21 08:33:31.454834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:21.834 [2024-07-21 08:33:31.454863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:21.834 qpair failed and we were unable to recover it. 
00:37:22.093 [2024-07-21 08:33:31.464721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.093 [2024-07-21 08:33:31.464837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.093 [2024-07-21 08:33:31.464864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.093 [2024-07-21 08:33:31.464878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.093 [2024-07-21 08:33:31.464891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.093 [2024-07-21 08:33:31.464922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.093 qpair failed and we were unable to recover it. 
00:37:22.093 [2024-07-21 08:33:31.474761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.093 [2024-07-21 08:33:31.474874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.093 [2024-07-21 08:33:31.474901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.093 [2024-07-21 08:33:31.474916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.093 [2024-07-21 08:33:31.474930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.093 [2024-07-21 08:33:31.474966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.093 qpair failed and we were unable to recover it. 
00:37:22.093 [2024-07-21 08:33:31.484746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.093 [2024-07-21 08:33:31.484866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.093 [2024-07-21 08:33:31.484893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.093 [2024-07-21 08:33:31.484908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.093 [2024-07-21 08:33:31.484920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.093 [2024-07-21 08:33:31.484951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.093 qpair failed and we were unable to recover it. 
00:37:22.093 [2024-07-21 08:33:31.494812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.093 [2024-07-21 08:33:31.494911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.093 [2024-07-21 08:33:31.494938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.093 [2024-07-21 08:33:31.494952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.093 [2024-07-21 08:33:31.494965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.093 [2024-07-21 08:33:31.494996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.093 qpair failed and we were unable to recover it. 
00:37:22.093 [2024-07-21 08:33:31.504877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.094 [2024-07-21 08:33:31.504975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.094 [2024-07-21 08:33:31.505001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.094 [2024-07-21 08:33:31.505016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.094 [2024-07-21 08:33:31.505029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.094 [2024-07-21 08:33:31.505059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.094 qpair failed and we were unable to recover it. 
00:37:22.094 [2024-07-21 08:33:31.514827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.094 [2024-07-21 08:33:31.514947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.094 [2024-07-21 08:33:31.514972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.094 [2024-07-21 08:33:31.514993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.094 [2024-07-21 08:33:31.515007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.094 [2024-07-21 08:33:31.515037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.094 qpair failed and we were unable to recover it. 
00:37:22.094 [2024-07-21 08:33:31.524884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.094 [2024-07-21 08:33:31.524995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.094 [2024-07-21 08:33:31.525021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.094 [2024-07-21 08:33:31.525035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.094 [2024-07-21 08:33:31.525048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.094 [2024-07-21 08:33:31.525078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.094 qpair failed and we were unable to recover it. 
00:37:22.094 [2024-07-21 08:33:31.534889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.094 [2024-07-21 08:33:31.534987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.094 [2024-07-21 08:33:31.535014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.094 [2024-07-21 08:33:31.535029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.094 [2024-07-21 08:33:31.535043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.094 [2024-07-21 08:33:31.535072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.094 qpair failed and we were unable to recover it. 
00:37:22.094 [2024-07-21 08:33:31.544915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.094 [2024-07-21 08:33:31.545008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.094 [2024-07-21 08:33:31.545033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.094 [2024-07-21 08:33:31.545048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.094 [2024-07-21 08:33:31.545061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.094 [2024-07-21 08:33:31.545090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.094 qpair failed and we were unable to recover it. 
00:37:22.094 [2024-07-21 08:33:31.554943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.094 [2024-07-21 08:33:31.555057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.094 [2024-07-21 08:33:31.555084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.094 [2024-07-21 08:33:31.555099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.094 [2024-07-21 08:33:31.555116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.094 [2024-07-21 08:33:31.555149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.094 qpair failed and we were unable to recover it.
00:37:22.094 [2024-07-21 08:33:31.564961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.094 [2024-07-21 08:33:31.565061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.094 [2024-07-21 08:33:31.565088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.094 [2024-07-21 08:33:31.565102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.094 [2024-07-21 08:33:31.565115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.094 [2024-07-21 08:33:31.565144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.094 qpair failed and we were unable to recover it.
00:37:22.094 [2024-07-21 08:33:31.575045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.094 [2024-07-21 08:33:31.575143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.094 [2024-07-21 08:33:31.575169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.094 [2024-07-21 08:33:31.575184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.094 [2024-07-21 08:33:31.575196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.094 [2024-07-21 08:33:31.575227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.094 qpair failed and we were unable to recover it.
00:37:22.094 [2024-07-21 08:33:31.585037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.094 [2024-07-21 08:33:31.585143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.094 [2024-07-21 08:33:31.585169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.094 [2024-07-21 08:33:31.585184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.094 [2024-07-21 08:33:31.585197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.094 [2024-07-21 08:33:31.585240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.094 qpair failed and we were unable to recover it.
00:37:22.094 [2024-07-21 08:33:31.595120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.094 [2024-07-21 08:33:31.595261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.094 [2024-07-21 08:33:31.595287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.094 [2024-07-21 08:33:31.595302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.094 [2024-07-21 08:33:31.595315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.094 [2024-07-21 08:33:31.595344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.094 qpair failed and we were unable to recover it.
00:37:22.094 [2024-07-21 08:33:31.605092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.094 [2024-07-21 08:33:31.605193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.094 [2024-07-21 08:33:31.605224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.094 [2024-07-21 08:33:31.605239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.094 [2024-07-21 08:33:31.605253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.094 [2024-07-21 08:33:31.605282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.094 qpair failed and we were unable to recover it.
00:37:22.094 [2024-07-21 08:33:31.615114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.094 [2024-07-21 08:33:31.615206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.094 [2024-07-21 08:33:31.615232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.094 [2024-07-21 08:33:31.615246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.094 [2024-07-21 08:33:31.615259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.094 [2024-07-21 08:33:31.615289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.094 qpair failed and we were unable to recover it.
00:37:22.094 [2024-07-21 08:33:31.625128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.094 [2024-07-21 08:33:31.625218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.094 [2024-07-21 08:33:31.625244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.094 [2024-07-21 08:33:31.625258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.094 [2024-07-21 08:33:31.625271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.094 [2024-07-21 08:33:31.625301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.094 qpair failed and we were unable to recover it.
00:37:22.094 [2024-07-21 08:33:31.635215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.094 [2024-07-21 08:33:31.635327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.094 [2024-07-21 08:33:31.635353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.094 [2024-07-21 08:33:31.635367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.094 [2024-07-21 08:33:31.635380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.095 [2024-07-21 08:33:31.635409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.095 qpair failed and we were unable to recover it.
00:37:22.095 [2024-07-21 08:33:31.645238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.095 [2024-07-21 08:33:31.645339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.095 [2024-07-21 08:33:31.645365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.095 [2024-07-21 08:33:31.645380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.095 [2024-07-21 08:33:31.645393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.095 [2024-07-21 08:33:31.645428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.095 qpair failed and we were unable to recover it.
00:37:22.095 [2024-07-21 08:33:31.655253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.095 [2024-07-21 08:33:31.655394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.095 [2024-07-21 08:33:31.655419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.095 [2024-07-21 08:33:31.655434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.095 [2024-07-21 08:33:31.655447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.095 [2024-07-21 08:33:31.655489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.095 qpair failed and we were unable to recover it.
00:37:22.095 [2024-07-21 08:33:31.665276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.095 [2024-07-21 08:33:31.665377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.095 [2024-07-21 08:33:31.665403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.095 [2024-07-21 08:33:31.665417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.095 [2024-07-21 08:33:31.665430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.095 [2024-07-21 08:33:31.665461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.095 qpair failed and we were unable to recover it.
00:37:22.095 [2024-07-21 08:33:31.675287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.095 [2024-07-21 08:33:31.675393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.095 [2024-07-21 08:33:31.675419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.095 [2024-07-21 08:33:31.675434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.095 [2024-07-21 08:33:31.675447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.095 [2024-07-21 08:33:31.675476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.095 qpair failed and we were unable to recover it.
00:37:22.095 [2024-07-21 08:33:31.685312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.095 [2024-07-21 08:33:31.685413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.095 [2024-07-21 08:33:31.685438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.095 [2024-07-21 08:33:31.685452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.095 [2024-07-21 08:33:31.685465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.095 [2024-07-21 08:33:31.685494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.095 qpair failed and we were unable to recover it.
00:37:22.095 [2024-07-21 08:33:31.695357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.095 [2024-07-21 08:33:31.695473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.095 [2024-07-21 08:33:31.695504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.095 [2024-07-21 08:33:31.695519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.095 [2024-07-21 08:33:31.695533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.095 [2024-07-21 08:33:31.695575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.095 qpair failed and we were unable to recover it.
00:37:22.095 [2024-07-21 08:33:31.705358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.095 [2024-07-21 08:33:31.705459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.095 [2024-07-21 08:33:31.705485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.095 [2024-07-21 08:33:31.705500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.095 [2024-07-21 08:33:31.705513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.095 [2024-07-21 08:33:31.705543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.095 qpair failed and we were unable to recover it.
00:37:22.095 [2024-07-21 08:33:31.715409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.095 [2024-07-21 08:33:31.715511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.095 [2024-07-21 08:33:31.715537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.095 [2024-07-21 08:33:31.715551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.095 [2024-07-21 08:33:31.715564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.095 [2024-07-21 08:33:31.715594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.095 qpair failed and we were unable to recover it.
00:37:22.354 [2024-07-21 08:33:31.725459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.354 [2024-07-21 08:33:31.725575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.354 [2024-07-21 08:33:31.725602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.354 [2024-07-21 08:33:31.725625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.354 [2024-07-21 08:33:31.725640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.354 [2024-07-21 08:33:31.725684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.354 qpair failed and we were unable to recover it.
00:37:22.354 [2024-07-21 08:33:31.735508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.354 [2024-07-21 08:33:31.735606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.354 [2024-07-21 08:33:31.735642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.354 [2024-07-21 08:33:31.735657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.354 [2024-07-21 08:33:31.735678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.354 [2024-07-21 08:33:31.735709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.354 qpair failed and we were unable to recover it.
00:37:22.354 [2024-07-21 08:33:31.745478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.354 [2024-07-21 08:33:31.745576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.354 [2024-07-21 08:33:31.745602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.354 [2024-07-21 08:33:31.745624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.354 [2024-07-21 08:33:31.745640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.354 [2024-07-21 08:33:31.745670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.354 qpair failed and we were unable to recover it.
00:37:22.354 [2024-07-21 08:33:31.755546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.354 [2024-07-21 08:33:31.755668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.354 [2024-07-21 08:33:31.755694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.755708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.755721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.755751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.765536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.765647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.765672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.765687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.765702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.765732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.775577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.775690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.775716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.775730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.775743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.775773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.785579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.785692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.785718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.785733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.785745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.785777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.795659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.795761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.795786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.795800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.795813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.795843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.805660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.805760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.805785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.805799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.805812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.805842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.815713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.815827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.815854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.815868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.815881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.815910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.825717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.825824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.825849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.825869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.825883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.825913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.835779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.835890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.835916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.835930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.835943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.835973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.845781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.845889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.845915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.845929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.845943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.845987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.855808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.855909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.855938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.855952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.855965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.855995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.865835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.865932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.865958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.865972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.865988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.866017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.875861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.876002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.876028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.876042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.876055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.876085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.885909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.886058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.886084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.886099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.355 [2024-07-21 08:33:31.886112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.355 [2024-07-21 08:33:31.886141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.355 qpair failed and we were unable to recover it.
00:37:22.355 [2024-07-21 08:33:31.895933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.355 [2024-07-21 08:33:31.896066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.355 [2024-07-21 08:33:31.896092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.355 [2024-07-21 08:33:31.896106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.356 [2024-07-21 08:33:31.896120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.356 [2024-07-21 08:33:31.896149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.356 qpair failed and we were unable to recover it.
00:37:22.356 [2024-07-21 08:33:31.905923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.356 [2024-07-21 08:33:31.906054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.356 [2024-07-21 08:33:31.906080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.356 [2024-07-21 08:33:31.906094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.356 [2024-07-21 08:33:31.906107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:22.356 [2024-07-21 08:33:31.906138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:22.356 qpair failed and we were unable to recover it.
00:37:22.356 [2024-07-21 08:33:31.915996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.356 [2024-07-21 08:33:31.916139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.356 [2024-07-21 08:33:31.916164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.356 [2024-07-21 08:33:31.916185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.356 [2024-07-21 08:33:31.916200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.356 [2024-07-21 08:33:31.916242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.356 qpair failed and we were unable to recover it. 
00:37:22.356 [2024-07-21 08:33:31.925970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.356 [2024-07-21 08:33:31.926065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.356 [2024-07-21 08:33:31.926091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.356 [2024-07-21 08:33:31.926105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.356 [2024-07-21 08:33:31.926118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.356 [2024-07-21 08:33:31.926149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.356 qpair failed and we were unable to recover it. 
00:37:22.356 [2024-07-21 08:33:31.936053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.356 [2024-07-21 08:33:31.936163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.356 [2024-07-21 08:33:31.936191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.356 [2024-07-21 08:33:31.936206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.356 [2024-07-21 08:33:31.936222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.356 [2024-07-21 08:33:31.936253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.356 qpair failed and we were unable to recover it. 
00:37:22.356 [2024-07-21 08:33:31.946095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.356 [2024-07-21 08:33:31.946205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.356 [2024-07-21 08:33:31.946231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.356 [2024-07-21 08:33:31.946245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.356 [2024-07-21 08:33:31.946258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.356 [2024-07-21 08:33:31.946289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.356 qpair failed and we were unable to recover it. 
00:37:22.356 [2024-07-21 08:33:31.956118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.356 [2024-07-21 08:33:31.956232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.356 [2024-07-21 08:33:31.956258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.356 [2024-07-21 08:33:31.956272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.356 [2024-07-21 08:33:31.956285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.356 [2024-07-21 08:33:31.956316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.356 qpair failed and we were unable to recover it. 
00:37:22.356 [2024-07-21 08:33:31.966077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.356 [2024-07-21 08:33:31.966175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.356 [2024-07-21 08:33:31.966201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.356 [2024-07-21 08:33:31.966215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.356 [2024-07-21 08:33:31.966227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.356 [2024-07-21 08:33:31.966256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.356 qpair failed and we were unable to recover it. 
00:37:22.356 [2024-07-21 08:33:31.976122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.356 [2024-07-21 08:33:31.976243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.356 [2024-07-21 08:33:31.976269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.356 [2024-07-21 08:33:31.976283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.356 [2024-07-21 08:33:31.976296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.356 [2024-07-21 08:33:31.976326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.356 qpair failed and we were unable to recover it. 
00:37:22.615 [2024-07-21 08:33:31.986147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.615 [2024-07-21 08:33:31.986252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.615 [2024-07-21 08:33:31.986278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.615 [2024-07-21 08:33:31.986292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.615 [2024-07-21 08:33:31.986304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.615 [2024-07-21 08:33:31.986333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.615 qpair failed and we were unable to recover it. 
00:37:22.615 [2024-07-21 08:33:31.996213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.615 [2024-07-21 08:33:31.996318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.615 [2024-07-21 08:33:31.996344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.615 [2024-07-21 08:33:31.996358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:31.996371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:31.996400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.006250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.006365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.006394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.006410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.006422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.006451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.016252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.016405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.016431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.016445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.016458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.016487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.026268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.026366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.026392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.026406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.026419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.026448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.036312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.036413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.036439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.036453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.036467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.036509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.046313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.046420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.046446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.046461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.046474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.046510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.056343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.056444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.056470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.056484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.056498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.056527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.066350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.066453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.066479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.066493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.066507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.066535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.076400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.076522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.076547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.076561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.076575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.076604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.086441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.086544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.086569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.086583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.086597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.086635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.096437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.096535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.096565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.096581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.096594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.096631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.106500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.106631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.106657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.106672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.106685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.106715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.116533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.116661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.116687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.116701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.116714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.116743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.126543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.126646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.126672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.126686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.126701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.616 [2024-07-21 08:33:32.126731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.616 qpair failed and we were unable to recover it. 
00:37:22.616 [2024-07-21 08:33:32.136573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.616 [2024-07-21 08:33:32.136681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.616 [2024-07-21 08:33:32.136710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.616 [2024-07-21 08:33:32.136725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.616 [2024-07-21 08:33:32.136743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.617 [2024-07-21 08:33:32.136774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.617 qpair failed and we were unable to recover it. 
00:37:22.617 [2024-07-21 08:33:32.146569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.617 [2024-07-21 08:33:32.146680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.617 [2024-07-21 08:33:32.146706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.617 [2024-07-21 08:33:32.146720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.617 [2024-07-21 08:33:32.146735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.617 [2024-07-21 08:33:32.146766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.617 qpair failed and we were unable to recover it. 
00:37:22.617 [2024-07-21 08:33:32.156677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.617 [2024-07-21 08:33:32.156784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.617 [2024-07-21 08:33:32.156809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.617 [2024-07-21 08:33:32.156824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.617 [2024-07-21 08:33:32.156837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.617 [2024-07-21 08:33:32.156867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.617 qpair failed and we were unable to recover it. 
00:37:22.617 [2024-07-21 08:33:32.166669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.617 [2024-07-21 08:33:32.166772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.617 [2024-07-21 08:33:32.166798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.617 [2024-07-21 08:33:32.166812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.617 [2024-07-21 08:33:32.166825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.617 [2024-07-21 08:33:32.166854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.617 qpair failed and we were unable to recover it. 
00:37:22.617 [2024-07-21 08:33:32.176673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.617 [2024-07-21 08:33:32.176769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.617 [2024-07-21 08:33:32.176795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.617 [2024-07-21 08:33:32.176809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.617 [2024-07-21 08:33:32.176822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.617 [2024-07-21 08:33:32.176851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.617 qpair failed and we were unable to recover it. 
00:37:22.617 [2024-07-21 08:33:32.186697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.617 [2024-07-21 08:33:32.186799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.617 [2024-07-21 08:33:32.186825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.617 [2024-07-21 08:33:32.186839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.617 [2024-07-21 08:33:32.186852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.617 [2024-07-21 08:33:32.186882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.617 qpair failed and we were unable to recover it. 
00:37:22.617 [2024-07-21 08:33:32.196727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.617 [2024-07-21 08:33:32.196831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.617 [2024-07-21 08:33:32.196856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.617 [2024-07-21 08:33:32.196870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.617 [2024-07-21 08:33:32.196883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.617 [2024-07-21 08:33:32.196913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.617 qpair failed and we were unable to recover it. 
00:37:22.617 [2024-07-21 08:33:32.206836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.617 [2024-07-21 08:33:32.206933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.617 [2024-07-21 08:33:32.206962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.617 [2024-07-21 08:33:32.206977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.617 [2024-07-21 08:33:32.206990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.617 [2024-07-21 08:33:32.207031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.617 qpair failed and we were unable to recover it. 
00:37:22.617 [2024-07-21 08:33:32.216797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.617 [2024-07-21 08:33:32.216892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.617 [2024-07-21 08:33:32.216918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.617 [2024-07-21 08:33:32.216932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.617 [2024-07-21 08:33:32.216945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.617 [2024-07-21 08:33:32.216975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.617 qpair failed and we were unable to recover it. 
00:37:22.617 [2024-07-21 08:33:32.226802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.617 [2024-07-21 08:33:32.226897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.617 [2024-07-21 08:33:32.226923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.617 [2024-07-21 08:33:32.226938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.617 [2024-07-21 08:33:32.226956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.617 [2024-07-21 08:33:32.226986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.617 qpair failed and we were unable to recover it. 
00:37:22.617 [2024-07-21 08:33:32.236902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.617 [2024-07-21 08:33:32.237004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.617 [2024-07-21 08:33:32.237030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.617 [2024-07-21 08:33:32.237044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.617 [2024-07-21 08:33:32.237058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.617 [2024-07-21 08:33:32.237088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.617 qpair failed and we were unable to recover it. 
00:37:22.876 [2024-07-21 08:33:32.246874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.876 [2024-07-21 08:33:32.246975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.876 [2024-07-21 08:33:32.247002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.876 [2024-07-21 08:33:32.247017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.876 [2024-07-21 08:33:32.247029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.876 [2024-07-21 08:33:32.247059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.876 qpair failed and we were unable to recover it. 
00:37:22.876 [2024-07-21 08:33:32.256913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.876 [2024-07-21 08:33:32.257010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.876 [2024-07-21 08:33:32.257036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.876 [2024-07-21 08:33:32.257050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.876 [2024-07-21 08:33:32.257064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.876 [2024-07-21 08:33:32.257093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.876 qpair failed and we were unable to recover it. 
00:37:22.876 [2024-07-21 08:33:32.266914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.876 [2024-07-21 08:33:32.267015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.876 [2024-07-21 08:33:32.267041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.876 [2024-07-21 08:33:32.267055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.876 [2024-07-21 08:33:32.267070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.876 [2024-07-21 08:33:32.267100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.876 qpair failed and we were unable to recover it. 
00:37:22.876 [2024-07-21 08:33:32.276990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.876 [2024-07-21 08:33:32.277099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.876 [2024-07-21 08:33:32.277128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.876 [2024-07-21 08:33:32.277143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.876 [2024-07-21 08:33:32.277157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.876 [2024-07-21 08:33:32.277187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.876 qpair failed and we were unable to recover it. 
00:37:22.876 [2024-07-21 08:33:32.286968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.876 [2024-07-21 08:33:32.287063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.876 [2024-07-21 08:33:32.287090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.876 [2024-07-21 08:33:32.287104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.876 [2024-07-21 08:33:32.287117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.287147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.297007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.297119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.297145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.297159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.297172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.297202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.307057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.307155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.307182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.307196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.307209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.307239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.317071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.317172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.317198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.317218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.317232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.317262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.327126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.327225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.327252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.327266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.327280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.327322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.337102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.337196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.337222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.337236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.337249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.337279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.347146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.347238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.347264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.347278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.347291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.347334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.357247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.357354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.357380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.357394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.357407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.357436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.367218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.367328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.367357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.367372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.367385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.367416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.377233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.377374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.377400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.377414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.377427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.377457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.387290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.387392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.387418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.387432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.387445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.387474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.397293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.397394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.397420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.397434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.397447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.397476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.407349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.407460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.407490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.407505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.407519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.407547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.417352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.417477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.417503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.417518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.417531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.877 [2024-07-21 08:33:32.417560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.877 qpair failed and we were unable to recover it. 
00:37:22.877 [2024-07-21 08:33:32.427373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.877 [2024-07-21 08:33:32.427494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.877 [2024-07-21 08:33:32.427521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.877 [2024-07-21 08:33:32.427535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.877 [2024-07-21 08:33:32.427549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.878 [2024-07-21 08:33:32.427578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.878 qpair failed and we were unable to recover it. 
00:37:22.878 [2024-07-21 08:33:32.437415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.878 [2024-07-21 08:33:32.437517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.878 [2024-07-21 08:33:32.437543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.878 [2024-07-21 08:33:32.437557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.878 [2024-07-21 08:33:32.437571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.878 [2024-07-21 08:33:32.437600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.878 qpair failed and we were unable to recover it. 
00:37:22.878 [2024-07-21 08:33:32.447459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.878 [2024-07-21 08:33:32.447568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.878 [2024-07-21 08:33:32.447597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.878 [2024-07-21 08:33:32.447611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.878 [2024-07-21 08:33:32.447641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.878 [2024-07-21 08:33:32.447691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.878 qpair failed and we were unable to recover it. 
00:37:22.878 [2024-07-21 08:33:32.457466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.878 [2024-07-21 08:33:32.457574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.878 [2024-07-21 08:33:32.457601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.878 [2024-07-21 08:33:32.457623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.878 [2024-07-21 08:33:32.457638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.878 [2024-07-21 08:33:32.457668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.878 qpair failed and we were unable to recover it. 
00:37:22.878 [2024-07-21 08:33:32.467509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.878 [2024-07-21 08:33:32.467637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.878 [2024-07-21 08:33:32.467663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.878 [2024-07-21 08:33:32.467678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.878 [2024-07-21 08:33:32.467690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.878 [2024-07-21 08:33:32.467727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.878 qpair failed and we were unable to recover it. 
00:37:22.878 [2024-07-21 08:33:32.477546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.878 [2024-07-21 08:33:32.477675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.878 [2024-07-21 08:33:32.477702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.878 [2024-07-21 08:33:32.477716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.878 [2024-07-21 08:33:32.477729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.878 [2024-07-21 08:33:32.477759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.878 qpair failed and we were unable to recover it. 
00:37:22.878 [2024-07-21 08:33:32.487563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.878 [2024-07-21 08:33:32.487681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.878 [2024-07-21 08:33:32.487707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.878 [2024-07-21 08:33:32.487721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.878 [2024-07-21 08:33:32.487734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.878 [2024-07-21 08:33:32.487763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.878 qpair failed and we were unable to recover it. 
00:37:22.878 [2024-07-21 08:33:32.497587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.878 [2024-07-21 08:33:32.497691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.878 [2024-07-21 08:33:32.497724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.878 [2024-07-21 08:33:32.497740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.878 [2024-07-21 08:33:32.497753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:22.878 [2024-07-21 08:33:32.497796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:22.878 qpair failed and we were unable to recover it. 
00:37:23.137 [2024-07-21 08:33:32.507603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.137 [2024-07-21 08:33:32.507703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.137 [2024-07-21 08:33:32.507731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.137 [2024-07-21 08:33:32.507745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.137 [2024-07-21 08:33:32.507758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.137 [2024-07-21 08:33:32.507788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.137 qpair failed and we were unable to recover it. 
00:37:23.137 [2024-07-21 08:33:32.517674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.137 [2024-07-21 08:33:32.517775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.137 [2024-07-21 08:33:32.517802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.137 [2024-07-21 08:33:32.517816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.137 [2024-07-21 08:33:32.517829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.137 [2024-07-21 08:33:32.517858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.137 qpair failed and we were unable to recover it. 
00:37:23.137 [... the same seven-record qpair failure sequence (Unknown controller ID 0x1 → Connect command failed, rc -5 → sct 1, sc 130 → Failed to poll NVMe-oF Fabric CONNECT command → Failed to connect tqpair=0x7fd7dc000b90 → CQ transport error -6 on qpair id 2) repeats 35 more times, one attempt roughly every 10 ms, timestamps 08:33:32.527 through 08:33:32.868; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:23.399 [2024-07-21 08:33:32.878706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.399 [2024-07-21 08:33:32.878803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.399 [2024-07-21 08:33:32.878828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.399 [2024-07-21 08:33:32.878843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.399 [2024-07-21 08:33:32.878856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.399 [2024-07-21 08:33:32.878885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.399 qpair failed and we were unable to recover it. 
00:37:23.399 [2024-07-21 08:33:32.888711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.399 [2024-07-21 08:33:32.888815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.399 [2024-07-21 08:33:32.888843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.399 [2024-07-21 08:33:32.888858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.399 [2024-07-21 08:33:32.888871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.399 [2024-07-21 08:33:32.888902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.399 qpair failed and we were unable to recover it. 
00:37:23.399 [2024-07-21 08:33:32.898775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.399 [2024-07-21 08:33:32.898899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.399 [2024-07-21 08:33:32.898930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.399 [2024-07-21 08:33:32.898945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.399 [2024-07-21 08:33:32.898958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.399 [2024-07-21 08:33:32.898988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.399 qpair failed and we were unable to recover it. 
00:37:23.399 [2024-07-21 08:33:32.908784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.399 [2024-07-21 08:33:32.908879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.399 [2024-07-21 08:33:32.908905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.399 [2024-07-21 08:33:32.908920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.399 [2024-07-21 08:33:32.908933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.399 [2024-07-21 08:33:32.908975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.399 qpair failed and we were unable to recover it. 
00:37:23.399 [2024-07-21 08:33:32.918819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.399 [2024-07-21 08:33:32.918921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.399 [2024-07-21 08:33:32.918947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.399 [2024-07-21 08:33:32.918962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.399 [2024-07-21 08:33:32.918975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.399 [2024-07-21 08:33:32.919004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.399 qpair failed and we were unable to recover it. 
00:37:23.399 [2024-07-21 08:33:32.928849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.399 [2024-07-21 08:33:32.928949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.399 [2024-07-21 08:33:32.928975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.399 [2024-07-21 08:33:32.928989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.399 [2024-07-21 08:33:32.929002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.399 [2024-07-21 08:33:32.929032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.399 qpair failed and we were unable to recover it. 
00:37:23.399 [2024-07-21 08:33:32.938861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.399 [2024-07-21 08:33:32.938960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.399 [2024-07-21 08:33:32.938985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.399 [2024-07-21 08:33:32.938999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.399 [2024-07-21 08:33:32.939012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.399 [2024-07-21 08:33:32.939046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.399 qpair failed and we were unable to recover it. 
00:37:23.399 [2024-07-21 08:33:32.948926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.399 [2024-07-21 08:33:32.949026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.399 [2024-07-21 08:33:32.949051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.399 [2024-07-21 08:33:32.949065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.399 [2024-07-21 08:33:32.949078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.399 [2024-07-21 08:33:32.949107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.399 qpair failed and we were unable to recover it. 
00:37:23.399 [2024-07-21 08:33:32.958936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.399 [2024-07-21 08:33:32.959060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.399 [2024-07-21 08:33:32.959085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.399 [2024-07-21 08:33:32.959099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.399 [2024-07-21 08:33:32.959112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.399 [2024-07-21 08:33:32.959142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.399 qpair failed and we were unable to recover it. 
00:37:23.399 [2024-07-21 08:33:32.968925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.399 [2024-07-21 08:33:32.969023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.399 [2024-07-21 08:33:32.969049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.399 [2024-07-21 08:33:32.969064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.399 [2024-07-21 08:33:32.969076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.399 [2024-07-21 08:33:32.969104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.399 qpair failed and we were unable to recover it. 
00:37:23.399 [2024-07-21 08:33:32.978953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.399 [2024-07-21 08:33:32.979045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.399 [2024-07-21 08:33:32.979071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.399 [2024-07-21 08:33:32.979085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.399 [2024-07-21 08:33:32.979098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.399 [2024-07-21 08:33:32.979129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.400 qpair failed and we were unable to recover it. 
00:37:23.400 [2024-07-21 08:33:32.989007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.400 [2024-07-21 08:33:32.989131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.400 [2024-07-21 08:33:32.989161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.400 [2024-07-21 08:33:32.989177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.400 [2024-07-21 08:33:32.989190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.400 [2024-07-21 08:33:32.989234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.400 qpair failed and we were unable to recover it. 
00:37:23.400 [2024-07-21 08:33:32.999040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.400 [2024-07-21 08:33:32.999170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.400 [2024-07-21 08:33:32.999196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.400 [2024-07-21 08:33:32.999211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.400 [2024-07-21 08:33:32.999225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.400 [2024-07-21 08:33:32.999254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.400 qpair failed and we were unable to recover it. 
00:37:23.400 [2024-07-21 08:33:33.009037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.400 [2024-07-21 08:33:33.009141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.400 [2024-07-21 08:33:33.009166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.400 [2024-07-21 08:33:33.009180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.400 [2024-07-21 08:33:33.009192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.400 [2024-07-21 08:33:33.009220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.400 qpair failed and we were unable to recover it. 
00:37:23.400 [2024-07-21 08:33:33.019131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.400 [2024-07-21 08:33:33.019250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.400 [2024-07-21 08:33:33.019277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.400 [2024-07-21 08:33:33.019291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.400 [2024-07-21 08:33:33.019305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.400 [2024-07-21 08:33:33.019334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.400 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.029113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.029210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.029237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.029251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.029269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.029300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.039177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.039291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.039318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.039333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.039346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.039376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.049177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.049276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.049302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.049317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.049330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.049360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.059205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.059298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.059324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.059338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.059352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.059382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.069253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.069350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.069376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.069391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.069404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.069434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.079309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.079419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.079445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.079462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.079476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.079506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.089279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.089375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.089401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.089415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.089429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.089458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.099311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.099415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.099441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.099455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.099468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.099498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.109354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.109455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.109481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.109495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.109508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.109537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.119415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.119516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.119542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.119562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.119576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.119625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.129378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.129476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.129502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.129516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.129529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.129558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.139418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.139516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.139542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.139556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.139569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.139599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.149459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.658 [2024-07-21 08:33:33.149560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.658 [2024-07-21 08:33:33.149585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.658 [2024-07-21 08:33:33.149600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.658 [2024-07-21 08:33:33.149619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.658 [2024-07-21 08:33:33.149652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.658 qpair failed and we were unable to recover it. 
00:37:23.658 [2024-07-21 08:33:33.159486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.658 [2024-07-21 08:33:33.159591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.658 [2024-07-21 08:33:33.159624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.658 [2024-07-21 08:33:33.159640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.658 [2024-07-21 08:33:33.159653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.159683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.169520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.169630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.169656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.169671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.169684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.169713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.179546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.179672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.179698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.179713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.179725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.179754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.189643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.189751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.189776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.189790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.189803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.189833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.199672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.199785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.199811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.199825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.199838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.199868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.209692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.209788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.209814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.209834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.209848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.209890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.219700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.219795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.219820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.219834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.219847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.219877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.229696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.229814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.229840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.229855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.229868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.229897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.239738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.239855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.239884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.239900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.239913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.239956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.249764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.249871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.249898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.249913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.249926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.249956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.259760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.259857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.259882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.259896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.259909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.259938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.269828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.269939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.269964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.269979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.269992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.270022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.659 [2024-07-21 08:33:33.279857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.659 [2024-07-21 08:33:33.279957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.659 [2024-07-21 08:33:33.279982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.659 [2024-07-21 08:33:33.279996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.659 [2024-07-21 08:33:33.280009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.659 [2024-07-21 08:33:33.280038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.659 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.289894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.290001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.290027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.290041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.290054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.290085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.299884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.299980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.300011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.300026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.300040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.300070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.309909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.310009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.310035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.310049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.310061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.310091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.319977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.320077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.320102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.320117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.320130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.320159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.329959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.330071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.330097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.330111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.330124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.330154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.340038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.340140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.340166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.340180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.340193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.340242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.350034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.350150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.350176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.350190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.350203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.350232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.360079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.360181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.360206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.360220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.360234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.360264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.370111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.370256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.370282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.370296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.370309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.370339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.380137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.380234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.380260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.380274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.380287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.380318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.390142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.390236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.390267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.390282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.390297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.390327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.400217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.400332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.400357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.400371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.400384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.918 [2024-07-21 08:33:33.400413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.918 qpair failed and we were unable to recover it.
00:37:23.918 [2024-07-21 08:33:33.410229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.918 [2024-07-21 08:33:33.410331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.918 [2024-07-21 08:33:33.410358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.918 [2024-07-21 08:33:33.410372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.918 [2024-07-21 08:33:33.410385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.919 [2024-07-21 08:33:33.410414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.919 qpair failed and we were unable to recover it.
00:37:23.919 [2024-07-21 08:33:33.420249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.919 [2024-07-21 08:33:33.420346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.919 [2024-07-21 08:33:33.420372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.919 [2024-07-21 08:33:33.420386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.919 [2024-07-21 08:33:33.420399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.919 [2024-07-21 08:33:33.420429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.919 qpair failed and we were unable to recover it.
00:37:23.919 [2024-07-21 08:33:33.430282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.919 [2024-07-21 08:33:33.430385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.919 [2024-07-21 08:33:33.430410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.919 [2024-07-21 08:33:33.430425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.919 [2024-07-21 08:33:33.430443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.919 [2024-07-21 08:33:33.430474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.919 qpair failed and we were unable to recover it.
00:37:23.919 [2024-07-21 08:33:33.440338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.919 [2024-07-21 08:33:33.440456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.919 [2024-07-21 08:33:33.440482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.919 [2024-07-21 08:33:33.440496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.919 [2024-07-21 08:33:33.440509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.919 [2024-07-21 08:33:33.440550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.919 qpair failed and we were unable to recover it.
00:37:23.919 [2024-07-21 08:33:33.450328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.919 [2024-07-21 08:33:33.450433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.919 [2024-07-21 08:33:33.450459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.919 [2024-07-21 08:33:33.450473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.919 [2024-07-21 08:33:33.450486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.919 [2024-07-21 08:33:33.450515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.919 qpair failed and we were unable to recover it.
00:37:23.919 [2024-07-21 08:33:33.460422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.919 [2024-07-21 08:33:33.460524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.919 [2024-07-21 08:33:33.460549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.919 [2024-07-21 08:33:33.460564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.919 [2024-07-21 08:33:33.460577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.919 [2024-07-21 08:33:33.460606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.919 qpair failed and we were unable to recover it.
00:37:23.919 [2024-07-21 08:33:33.470381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.919 [2024-07-21 08:33:33.470480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.919 [2024-07-21 08:33:33.470505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.919 [2024-07-21 08:33:33.470519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.919 [2024-07-21 08:33:33.470532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.919 [2024-07-21 08:33:33.470562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.919 qpair failed and we were unable to recover it.
00:37:23.919 [2024-07-21 08:33:33.480428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.919 [2024-07-21 08:33:33.480555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.919 [2024-07-21 08:33:33.480581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.919 [2024-07-21 08:33:33.480595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.919 [2024-07-21 08:33:33.480610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.919 [2024-07-21 08:33:33.480653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.919 qpair failed and we were unable to recover it.
00:37:23.919 [2024-07-21 08:33:33.490466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.919 [2024-07-21 08:33:33.490565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.919 [2024-07-21 08:33:33.490591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.919 [2024-07-21 08:33:33.490605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.919 [2024-07-21 08:33:33.490629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.919 [2024-07-21 08:33:33.490672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.919 qpair failed and we were unable to recover it.
00:37:23.919 [2024-07-21 08:33:33.500468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.919 [2024-07-21 08:33:33.500563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.919 [2024-07-21 08:33:33.500590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.919 [2024-07-21 08:33:33.500604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.919 [2024-07-21 08:33:33.500627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.919 [2024-07-21 08:33:33.500659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.919 qpair failed and we were unable to recover it.
00:37:23.919 [2024-07-21 08:33:33.510531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.919 [2024-07-21 08:33:33.510640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.919 [2024-07-21 08:33:33.510666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.919 [2024-07-21 08:33:33.510681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.919 [2024-07-21 08:33:33.510696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:23.919 [2024-07-21 08:33:33.510727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:23.919 qpair failed and we were unable to recover it.
00:37:23.919 [2024-07-21 08:33:33.520543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.919 [2024-07-21 08:33:33.520655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.919 [2024-07-21 08:33:33.520682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.919 [2024-07-21 08:33:33.520702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.919 [2024-07-21 08:33:33.520717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.919 [2024-07-21 08:33:33.520746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.919 qpair failed and we were unable to recover it. 
00:37:23.919 [2024-07-21 08:33:33.530594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.919 [2024-07-21 08:33:33.530710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.919 [2024-07-21 08:33:33.530736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.919 [2024-07-21 08:33:33.530751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.919 [2024-07-21 08:33:33.530764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.919 [2024-07-21 08:33:33.530793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.919 qpair failed and we were unable to recover it. 
00:37:23.919 [2024-07-21 08:33:33.540663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.919 [2024-07-21 08:33:33.540805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.919 [2024-07-21 08:33:33.540830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.919 [2024-07-21 08:33:33.540845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.919 [2024-07-21 08:33:33.540858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:23.919 [2024-07-21 08:33:33.540888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:23.919 qpair failed and we were unable to recover it. 
00:37:24.177 [2024-07-21 08:33:33.550641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.177 [2024-07-21 08:33:33.550788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.177 [2024-07-21 08:33:33.550814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.177 [2024-07-21 08:33:33.550828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.177 [2024-07-21 08:33:33.550842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.177 [2024-07-21 08:33:33.550871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.177 qpair failed and we were unable to recover it. 
00:37:24.177 [2024-07-21 08:33:33.560673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.177 [2024-07-21 08:33:33.560776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.177 [2024-07-21 08:33:33.560801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.177 [2024-07-21 08:33:33.560816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.177 [2024-07-21 08:33:33.560829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.177 [2024-07-21 08:33:33.560858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.177 qpair failed and we were unable to recover it. 
00:37:24.177 [2024-07-21 08:33:33.570676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.177 [2024-07-21 08:33:33.570775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.177 [2024-07-21 08:33:33.570800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.570814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.570828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.570859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.580716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.580810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.580837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.580852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.580865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.580895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.590797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.590940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.590969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.590984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.590997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.591029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.600797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.600899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.600925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.600940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.600953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.600982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.610835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.610967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.610992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.611013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.611027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.611057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.620830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.620922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.620947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.620961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.620974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.621003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.630893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.630991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.631017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.631031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.631044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.631073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.640951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.641060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.641086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.641100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.641113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.641143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.650944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.651051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.651077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.651091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.651104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.651133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.661016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.661117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.661143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.661158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.661171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.661200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.671009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.671109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.671136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.671152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.671166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.671199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.681053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.681156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.681182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.681197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.681211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.681240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.691070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.691184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.691212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.691227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.691240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.691269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.701064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.701166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.701198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.701213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.701226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.178 [2024-07-21 08:33:33.701255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.178 qpair failed and we were unable to recover it. 
00:37:24.178 [2024-07-21 08:33:33.711091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.178 [2024-07-21 08:33:33.711188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.178 [2024-07-21 08:33:33.711214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.178 [2024-07-21 08:33:33.711228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.178 [2024-07-21 08:33:33.711242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.179 [2024-07-21 08:33:33.711271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.179 qpair failed and we were unable to recover it. 
00:37:24.179 [2024-07-21 08:33:33.721156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.179 [2024-07-21 08:33:33.721263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.179 [2024-07-21 08:33:33.721289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.179 [2024-07-21 08:33:33.721304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.179 [2024-07-21 08:33:33.721317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.179 [2024-07-21 08:33:33.721347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.179 qpair failed and we were unable to recover it. 
00:37:24.179 [2024-07-21 08:33:33.731149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.179 [2024-07-21 08:33:33.731250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.179 [2024-07-21 08:33:33.731275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.179 [2024-07-21 08:33:33.731289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.179 [2024-07-21 08:33:33.731302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.179 [2024-07-21 08:33:33.731332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.179 qpair failed and we were unable to recover it. 
00:37:24.179 [2024-07-21 08:33:33.741227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.179 [2024-07-21 08:33:33.741324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.179 [2024-07-21 08:33:33.741350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.179 [2024-07-21 08:33:33.741364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.179 [2024-07-21 08:33:33.741376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.179 [2024-07-21 08:33:33.741411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.179 qpair failed and we were unable to recover it. 
00:37:24.179 [2024-07-21 08:33:33.751227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.179 [2024-07-21 08:33:33.751329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.179 [2024-07-21 08:33:33.751354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.179 [2024-07-21 08:33:33.751368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.179 [2024-07-21 08:33:33.751381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.179 [2024-07-21 08:33:33.751413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.179 qpair failed and we were unable to recover it. 
00:37:24.179 [2024-07-21 08:33:33.761285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.179 [2024-07-21 08:33:33.761392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.179 [2024-07-21 08:33:33.761420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.179 [2024-07-21 08:33:33.761435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.179 [2024-07-21 08:33:33.761448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.179 [2024-07-21 08:33:33.761489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.179 qpair failed and we were unable to recover it. 
00:37:24.179 [2024-07-21 08:33:33.771290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.179 [2024-07-21 08:33:33.771411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.179 [2024-07-21 08:33:33.771438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.179 [2024-07-21 08:33:33.771453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.179 [2024-07-21 08:33:33.771466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.179 [2024-07-21 08:33:33.771496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.179 qpair failed and we were unable to recover it. 
00:37:24.179 [2024-07-21 08:33:33.781337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.179 [2024-07-21 08:33:33.781463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.179 [2024-07-21 08:33:33.781489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.179 [2024-07-21 08:33:33.781503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.179 [2024-07-21 08:33:33.781517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.179 [2024-07-21 08:33:33.781546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.179 qpair failed and we were unable to recover it. 
00:37:24.179 [2024-07-21 08:33:33.791329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.179 [2024-07-21 08:33:33.791429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.179 [2024-07-21 08:33:33.791461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.179 [2024-07-21 08:33:33.791475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.179 [2024-07-21 08:33:33.791489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.179 [2024-07-21 08:33:33.791518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.179 qpair failed and we were unable to recover it. 
00:37:24.179 [2024-07-21 08:33:33.801353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.179 [2024-07-21 08:33:33.801457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.179 [2024-07-21 08:33:33.801483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.179 [2024-07-21 08:33:33.801498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.179 [2024-07-21 08:33:33.801512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.179 [2024-07-21 08:33:33.801541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.179 qpair failed and we were unable to recover it. 
00:37:24.438 [... the same seven-message CONNECT failure sequence (Unknown controller ID 0x1 → Connect command failed, rc -5 → sct 1, sc 130 → Failed to poll NVMe-oF Fabric CONNECT command → Failed to connect tqpair=0x7fd7dc000b90 → CQ transport error -6 on qpair id 2 → qpair failed and we were unable to recover it) repeats 35 more times at roughly 10 ms intervals, from 08:33:33.811 through 08:33:34.152 ...]
00:37:24.698 [2024-07-21 08:33:34.162442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.698 [2024-07-21 08:33:34.162553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.698 [2024-07-21 08:33:34.162578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.698 [2024-07-21 08:33:34.162592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.698 [2024-07-21 08:33:34.162606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.698 [2024-07-21 08:33:34.162644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.698 qpair failed and we were unable to recover it. 
00:37:24.698 [2024-07-21 08:33:34.172427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.698 [2024-07-21 08:33:34.172542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.698 [2024-07-21 08:33:34.172581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.698 [2024-07-21 08:33:34.172596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.698 [2024-07-21 08:33:34.172609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.698 [2024-07-21 08:33:34.172646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.698 qpair failed and we were unable to recover it. 
00:37:24.698 [2024-07-21 08:33:34.182438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.698 [2024-07-21 08:33:34.182561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.698 [2024-07-21 08:33:34.182587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.698 [2024-07-21 08:33:34.182601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.698 [2024-07-21 08:33:34.182622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.698 [2024-07-21 08:33:34.182654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.698 qpair failed and we were unable to recover it. 
00:37:24.698 [2024-07-21 08:33:34.192500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.698 [2024-07-21 08:33:34.192655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.698 [2024-07-21 08:33:34.192686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.698 [2024-07-21 08:33:34.192702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.698 [2024-07-21 08:33:34.192715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.698 [2024-07-21 08:33:34.192745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.698 qpair failed and we were unable to recover it. 
00:37:24.698 [2024-07-21 08:33:34.202502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.698 [2024-07-21 08:33:34.202602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.698 [2024-07-21 08:33:34.202636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.698 [2024-07-21 08:33:34.202652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.698 [2024-07-21 08:33:34.202664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.698 [2024-07-21 08:33:34.202693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.698 qpair failed and we were unable to recover it. 
00:37:24.698 [2024-07-21 08:33:34.212540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.698 [2024-07-21 08:33:34.212641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.698 [2024-07-21 08:33:34.212666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.698 [2024-07-21 08:33:34.212681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.698 [2024-07-21 08:33:34.212694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.698 [2024-07-21 08:33:34.212723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.698 qpair failed and we were unable to recover it. 
00:37:24.698 [2024-07-21 08:33:34.222536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.698 [2024-07-21 08:33:34.222642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.698 [2024-07-21 08:33:34.222668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.698 [2024-07-21 08:33:34.222682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.698 [2024-07-21 08:33:34.222695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.698 [2024-07-21 08:33:34.222724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.698 qpair failed and we were unable to recover it. 
00:37:24.698 [2024-07-21 08:33:34.232601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.698 [2024-07-21 08:33:34.232721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.698 [2024-07-21 08:33:34.232747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.698 [2024-07-21 08:33:34.232761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.698 [2024-07-21 08:33:34.232774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.698 [2024-07-21 08:33:34.232822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.698 qpair failed and we were unable to recover it. 
00:37:24.698 [2024-07-21 08:33:34.242632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.698 [2024-07-21 08:33:34.242737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.698 [2024-07-21 08:33:34.242762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.699 [2024-07-21 08:33:34.242777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.699 [2024-07-21 08:33:34.242790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.699 [2024-07-21 08:33:34.242820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.699 qpair failed and we were unable to recover it. 
00:37:24.699 [2024-07-21 08:33:34.252638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.699 [2024-07-21 08:33:34.252740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.699 [2024-07-21 08:33:34.252765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.699 [2024-07-21 08:33:34.252780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.699 [2024-07-21 08:33:34.252793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.699 [2024-07-21 08:33:34.252823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.699 qpair failed and we were unable to recover it. 
00:37:24.699 [2024-07-21 08:33:34.262682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.699 [2024-07-21 08:33:34.262785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.699 [2024-07-21 08:33:34.262811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.699 [2024-07-21 08:33:34.262825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.699 [2024-07-21 08:33:34.262838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.699 [2024-07-21 08:33:34.262868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.699 qpair failed and we were unable to recover it. 
00:37:24.699 [2024-07-21 08:33:34.272688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.699 [2024-07-21 08:33:34.272787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.699 [2024-07-21 08:33:34.272813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.699 [2024-07-21 08:33:34.272828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.699 [2024-07-21 08:33:34.272840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.699 [2024-07-21 08:33:34.272870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.699 qpair failed and we were unable to recover it. 
00:37:24.699 [2024-07-21 08:33:34.282737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.699 [2024-07-21 08:33:34.282853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.699 [2024-07-21 08:33:34.282879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.699 [2024-07-21 08:33:34.282894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.699 [2024-07-21 08:33:34.282907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.699 [2024-07-21 08:33:34.282936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.699 qpair failed and we were unable to recover it. 
00:37:24.699 [2024-07-21 08:33:34.292756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.699 [2024-07-21 08:33:34.292852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.699 [2024-07-21 08:33:34.292879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.699 [2024-07-21 08:33:34.292893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.699 [2024-07-21 08:33:34.292906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.699 [2024-07-21 08:33:34.292935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.699 qpair failed and we were unable to recover it. 
00:37:24.699 [2024-07-21 08:33:34.302798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.699 [2024-07-21 08:33:34.302896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.699 [2024-07-21 08:33:34.302922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.699 [2024-07-21 08:33:34.302936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.699 [2024-07-21 08:33:34.302949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.699 [2024-07-21 08:33:34.302979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.699 qpair failed and we were unable to recover it. 
00:37:24.699 [2024-07-21 08:33:34.312829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.699 [2024-07-21 08:33:34.312930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.699 [2024-07-21 08:33:34.312957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.699 [2024-07-21 08:33:34.312971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.699 [2024-07-21 08:33:34.312984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.699 [2024-07-21 08:33:34.313013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.699 qpair failed and we were unable to recover it. 
00:37:24.699 [2024-07-21 08:33:34.322852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.699 [2024-07-21 08:33:34.322956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.699 [2024-07-21 08:33:34.322981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.699 [2024-07-21 08:33:34.322995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.699 [2024-07-21 08:33:34.323014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.699 [2024-07-21 08:33:34.323044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.699 qpair failed and we were unable to recover it. 
00:37:24.957 [2024-07-21 08:33:34.332884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.957 [2024-07-21 08:33:34.332985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.957 [2024-07-21 08:33:34.333012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.957 [2024-07-21 08:33:34.333027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.957 [2024-07-21 08:33:34.333040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.957 [2024-07-21 08:33:34.333070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.957 qpair failed and we were unable to recover it. 
00:37:24.957 [2024-07-21 08:33:34.342912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.957 [2024-07-21 08:33:34.343037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.957 [2024-07-21 08:33:34.343063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.957 [2024-07-21 08:33:34.343077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.957 [2024-07-21 08:33:34.343090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.957 [2024-07-21 08:33:34.343121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.957 qpair failed and we were unable to recover it. 
00:37:24.957 [2024-07-21 08:33:34.352945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.957 [2024-07-21 08:33:34.353049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.957 [2024-07-21 08:33:34.353075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.957 [2024-07-21 08:33:34.353090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.957 [2024-07-21 08:33:34.353103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.957 [2024-07-21 08:33:34.353132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.957 qpair failed and we were unable to recover it. 
00:37:24.957 [2024-07-21 08:33:34.362966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.957 [2024-07-21 08:33:34.363089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.957 [2024-07-21 08:33:34.363116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.957 [2024-07-21 08:33:34.363130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.957 [2024-07-21 08:33:34.363142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.957 [2024-07-21 08:33:34.363171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.957 qpair failed and we were unable to recover it. 
00:37:24.957 [2024-07-21 08:33:34.372962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.957 [2024-07-21 08:33:34.373072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.957 [2024-07-21 08:33:34.373100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.957 [2024-07-21 08:33:34.373115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.957 [2024-07-21 08:33:34.373128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.957 [2024-07-21 08:33:34.373158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.957 qpair failed and we were unable to recover it. 
00:37:24.957 [2024-07-21 08:33:34.383060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.383168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.383197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.383212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.383225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.383256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.393024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.393118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.393144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.393158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.393170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.393199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.403063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.403163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.403190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.403204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.403217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.403247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.413115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.413232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.413258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.413279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.413293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.413323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.423150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.423248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.423274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.423288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.423301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.423331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.433177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.433299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.433325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.433340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.433353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.433382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.443201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.443304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.443330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.443344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.443358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.443386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.453283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.453376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.453401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.453416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.453430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.453471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.463254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.463371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.463397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.463411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.463424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.463454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.473277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.473404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.473430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.473444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.473457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.473487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.483369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.483522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.483548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.483563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.483576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.483607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.493330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.493458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.493484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.493498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.493511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.493540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.503359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.503458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.503489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.503504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.503517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.503546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.513408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.513531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.958 [2024-07-21 08:33:34.513556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.958 [2024-07-21 08:33:34.513570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.958 [2024-07-21 08:33:34.513583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.958 [2024-07-21 08:33:34.513620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.958 qpair failed and we were unable to recover it. 
00:37:24.958 [2024-07-21 08:33:34.523427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.958 [2024-07-21 08:33:34.523547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.959 [2024-07-21 08:33:34.523572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.959 [2024-07-21 08:33:34.523586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.959 [2024-07-21 08:33:34.523600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.959 [2024-07-21 08:33:34.523637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.959 qpair failed and we were unable to recover it. 
00:37:24.959 [2024-07-21 08:33:34.533454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.959 [2024-07-21 08:33:34.533565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.959 [2024-07-21 08:33:34.533591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.959 [2024-07-21 08:33:34.533605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.959 [2024-07-21 08:33:34.533628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.959 [2024-07-21 08:33:34.533659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.959 qpair failed and we were unable to recover it. 
00:37:24.959 [2024-07-21 08:33:34.543502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.959 [2024-07-21 08:33:34.543599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.959 [2024-07-21 08:33:34.543634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.959 [2024-07-21 08:33:34.543649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.959 [2024-07-21 08:33:34.543662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.959 [2024-07-21 08:33:34.543697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.959 qpair failed and we were unable to recover it. 
00:37:24.959 [2024-07-21 08:33:34.553504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.959 [2024-07-21 08:33:34.553606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.959 [2024-07-21 08:33:34.553642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.959 [2024-07-21 08:33:34.553656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.959 [2024-07-21 08:33:34.553670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.959 [2024-07-21 08:33:34.553712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.959 qpair failed and we were unable to recover it. 
00:37:24.959 [2024-07-21 08:33:34.563568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.959 [2024-07-21 08:33:34.563722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.959 [2024-07-21 08:33:34.563749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.959 [2024-07-21 08:33:34.563763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.959 [2024-07-21 08:33:34.563776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.959 [2024-07-21 08:33:34.563805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.959 qpair failed and we were unable to recover it. 
00:37:24.959 [2024-07-21 08:33:34.573576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.959 [2024-07-21 08:33:34.573691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.959 [2024-07-21 08:33:34.573717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.959 [2024-07-21 08:33:34.573731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.959 [2024-07-21 08:33:34.573745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.959 [2024-07-21 08:33:34.573774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.959 qpair failed and we were unable to recover it. 
00:37:24.959 [2024-07-21 08:33:34.583633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.959 [2024-07-21 08:33:34.583737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.959 [2024-07-21 08:33:34.583766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.959 [2024-07-21 08:33:34.583781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.959 [2024-07-21 08:33:34.583794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:24.959 [2024-07-21 08:33:34.583825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:24.959 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.593645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.593745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.593779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.218 [2024-07-21 08:33:34.593795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.218 [2024-07-21 08:33:34.593808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.218 [2024-07-21 08:33:34.593839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.218 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.603681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.603787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.603814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.218 [2024-07-21 08:33:34.603828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.218 [2024-07-21 08:33:34.603841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.218 [2024-07-21 08:33:34.603873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.218 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.613720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.613829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.613855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.218 [2024-07-21 08:33:34.613870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.218 [2024-07-21 08:33:34.613883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.218 [2024-07-21 08:33:34.613926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.218 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.623711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.623816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.623842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.218 [2024-07-21 08:33:34.623856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.218 [2024-07-21 08:33:34.623870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.218 [2024-07-21 08:33:34.623899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.218 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.633725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.633827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.633852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.218 [2024-07-21 08:33:34.633866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.218 [2024-07-21 08:33:34.633879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.218 [2024-07-21 08:33:34.633915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.218 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.643793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.643896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.643922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.218 [2024-07-21 08:33:34.643937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.218 [2024-07-21 08:33:34.643951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.218 [2024-07-21 08:33:34.643981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.218 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.653799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.653900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.653927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.218 [2024-07-21 08:33:34.653941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.218 [2024-07-21 08:33:34.653954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.218 [2024-07-21 08:33:34.653984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.218 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.663820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.663931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.663959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.218 [2024-07-21 08:33:34.663973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.218 [2024-07-21 08:33:34.663991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.218 [2024-07-21 08:33:34.664023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.218 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.673815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.673912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.673938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.218 [2024-07-21 08:33:34.673952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.218 [2024-07-21 08:33:34.673966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.218 [2024-07-21 08:33:34.673995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.218 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.683920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.684026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.684060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.218 [2024-07-21 08:33:34.684075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.218 [2024-07-21 08:33:34.684088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.218 [2024-07-21 08:33:34.684118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.218 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.693926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.694047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.694073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.218 [2024-07-21 08:33:34.694087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.218 [2024-07-21 08:33:34.694100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.218 [2024-07-21 08:33:34.694129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.218 qpair failed and we were unable to recover it. 
00:37:25.218 [2024-07-21 08:33:34.703920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.218 [2024-07-21 08:33:34.704017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.218 [2024-07-21 08:33:34.704043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.704058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.704071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.704103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.713940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.714033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.714058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.714073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.714087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.714117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.723987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.724092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.724118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.724132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.724151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.724182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.734068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.734212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.734237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.734252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.734264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.734305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.744067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.744162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.744191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.744206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.744219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.744249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.754097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.754236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.754262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.754276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.754289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.754320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.764082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.764183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.764209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.764223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.764236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.764266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.774152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.774261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.774288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.774302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.774315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.774345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.784163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.784267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.784295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.784310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.784323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.784352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.794153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.794250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.794276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.794290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.794304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.794333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.804254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.804401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.804429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.804446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.804459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.804489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.814214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.814315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.814341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.814361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.814375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.814405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.824295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.824413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.824439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.824453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.824466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.824495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.834295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.834399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.834426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.834440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.834454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.834496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.219 qpair failed and we were unable to recover it. 
00:37:25.219 [2024-07-21 08:33:34.844394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.219 [2024-07-21 08:33:34.844515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.219 [2024-07-21 08:33:34.844542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.219 [2024-07-21 08:33:34.844556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.219 [2024-07-21 08:33:34.844569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.219 [2024-07-21 08:33:34.844627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.220 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.854353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.854458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.854485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.854500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.854513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.478 [2024-07-21 08:33:34.854542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.478 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.864381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.864485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.864511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.864526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.864539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.478 [2024-07-21 08:33:34.864568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.478 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.874436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.874579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.874608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.874639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.874656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.478 [2024-07-21 08:33:34.874688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.478 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.884486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.884642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.884669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.884684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.884697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.478 [2024-07-21 08:33:34.884727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.478 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.894505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.894604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.894638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.894653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.894667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.478 [2024-07-21 08:33:34.894696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.478 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.904513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.904623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.904649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.904669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.904683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.478 [2024-07-21 08:33:34.904713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.478 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.914547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.914664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.914691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.914705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.914721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.478 [2024-07-21 08:33:34.914751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.478 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.924539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.924660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.924687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.924701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.924714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.478 [2024-07-21 08:33:34.924744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.478 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.934632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.934758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.934784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.934798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.934812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.478 [2024-07-21 08:33:34.934841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.478 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.944590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.944699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.944726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.944741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.944754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.478 [2024-07-21 08:33:34.944799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.478 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.954619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.954713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.954739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.954753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.954766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.478 [2024-07-21 08:33:34.954796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.478 qpair failed and we were unable to recover it. 
00:37:25.478 [2024-07-21 08:33:34.964676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.478 [2024-07-21 08:33:34.964790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.478 [2024-07-21 08:33:34.964816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.478 [2024-07-21 08:33:34.964830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.478 [2024-07-21 08:33:34.964843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:34.964872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:34.974677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.479 [2024-07-21 08:33:34.974770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.479 [2024-07-21 08:33:34.974796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.479 [2024-07-21 08:33:34.974810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.479 [2024-07-21 08:33:34.974822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:34.974851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:34.984711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.479 [2024-07-21 08:33:34.984807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.479 [2024-07-21 08:33:34.984832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.479 [2024-07-21 08:33:34.984846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.479 [2024-07-21 08:33:34.984859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:34.984888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:34.994750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.479 [2024-07-21 08:33:34.994871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.479 [2024-07-21 08:33:34.994902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.479 [2024-07-21 08:33:34.994917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.479 [2024-07-21 08:33:34.994930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:34.994972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:35.004812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.479 [2024-07-21 08:33:35.004946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.479 [2024-07-21 08:33:35.004971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.479 [2024-07-21 08:33:35.004985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.479 [2024-07-21 08:33:35.004998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:35.005036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:35.014838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.479 [2024-07-21 08:33:35.014966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.479 [2024-07-21 08:33:35.014990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.479 [2024-07-21 08:33:35.015004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.479 [2024-07-21 08:33:35.015016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:35.015044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:35.024847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.479 [2024-07-21 08:33:35.024977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.479 [2024-07-21 08:33:35.025003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.479 [2024-07-21 08:33:35.025017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.479 [2024-07-21 08:33:35.025030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:35.025060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:35.034851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.479 [2024-07-21 08:33:35.034947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.479 [2024-07-21 08:33:35.034973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.479 [2024-07-21 08:33:35.034987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.479 [2024-07-21 08:33:35.035001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:35.035035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:35.044911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.479 [2024-07-21 08:33:35.045014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.479 [2024-07-21 08:33:35.045039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.479 [2024-07-21 08:33:35.045054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.479 [2024-07-21 08:33:35.045067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:35.045096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:35.054902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.479 [2024-07-21 08:33:35.054998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.479 [2024-07-21 08:33:35.055023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.479 [2024-07-21 08:33:35.055038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.479 [2024-07-21 08:33:35.055051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:35.055081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:35.064932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.479 [2024-07-21 08:33:35.065028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.479 [2024-07-21 08:33:35.065054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.479 [2024-07-21 08:33:35.065068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.479 [2024-07-21 08:33:35.065081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:35.065111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:35.075068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.479 [2024-07-21 08:33:35.075160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.479 [2024-07-21 08:33:35.075185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.479 [2024-07-21 08:33:35.075199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.479 [2024-07-21 08:33:35.075212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:25.479 [2024-07-21 08:33:35.075242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:25.479 qpair failed and we were unable to recover it. 
00:37:25.479 [2024-07-21 08:33:35.084993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.479 [2024-07-21 08:33:35.085095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.479 [2024-07-21 08:33:35.085126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.479 [2024-07-21 08:33:35.085141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.479 [2024-07-21 08:33:35.085154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.479 [2024-07-21 08:33:35.085183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.479 qpair failed and we were unable to recover it.
00:37:25.479 [2024-07-21 08:33:35.095015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.479 [2024-07-21 08:33:35.095140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.479 [2024-07-21 08:33:35.095166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.479 [2024-07-21 08:33:35.095180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.479 [2024-07-21 08:33:35.095193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.479 [2024-07-21 08:33:35.095223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.479 qpair failed and we were unable to recover it.
00:37:25.479 [2024-07-21 08:33:35.105101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.479 [2024-07-21 08:33:35.105202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.479 [2024-07-21 08:33:35.105228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.479 [2024-07-21 08:33:35.105242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.479 [2024-07-21 08:33:35.105254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.479 [2024-07-21 08:33:35.105283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.479 qpair failed and we were unable to recover it.
00:37:25.737 [2024-07-21 08:33:35.115072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.737 [2024-07-21 08:33:35.115165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.737 [2024-07-21 08:33:35.115191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.737 [2024-07-21 08:33:35.115205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.737 [2024-07-21 08:33:35.115217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.737 [2024-07-21 08:33:35.115247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.737 qpair failed and we were unable to recover it.
00:37:25.737 [2024-07-21 08:33:35.125142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.737 [2024-07-21 08:33:35.125250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.737 [2024-07-21 08:33:35.125275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.737 [2024-07-21 08:33:35.125290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.737 [2024-07-21 08:33:35.125308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.737 [2024-07-21 08:33:35.125340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.737 qpair failed and we were unable to recover it.
00:37:25.737 [2024-07-21 08:33:35.135134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.737 [2024-07-21 08:33:35.135230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.737 [2024-07-21 08:33:35.135256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.737 [2024-07-21 08:33:35.135270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.737 [2024-07-21 08:33:35.135284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.737 [2024-07-21 08:33:35.135313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.737 qpair failed and we were unable to recover it.
00:37:25.737 [2024-07-21 08:33:35.145164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.737 [2024-07-21 08:33:35.145255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.737 [2024-07-21 08:33:35.145280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.737 [2024-07-21 08:33:35.145295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.737 [2024-07-21 08:33:35.145308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.737 [2024-07-21 08:33:35.145338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.737 qpair failed and we were unable to recover it.
00:37:25.737 [2024-07-21 08:33:35.155215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.737 [2024-07-21 08:33:35.155316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.155342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.155356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.155369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.155398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.165236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.165388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.165414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.165428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.165442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.165470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.175246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.175349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.175375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.175389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.175403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.175432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.185270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.185367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.185393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.185407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.185419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.185450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.195290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.195391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.195417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.195431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.195446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.195476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.205330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.205434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.205459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.205474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.205487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.205516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.215378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.215479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.215504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.215524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.215538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.215568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.225415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.225506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.225531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.225545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.225558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.225588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.235431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.235535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.235560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.235573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.235586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.235621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.245479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.245599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.245633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.245649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.245662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.245693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.255479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.255583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.255608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.255632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.255646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.255675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.265537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.265655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.265681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.265695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.265708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.265751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.275539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.275645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.738 [2024-07-21 08:33:35.275674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.738 [2024-07-21 08:33:35.275688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.738 [2024-07-21 08:33:35.275701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.738 [2024-07-21 08:33:35.275731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.738 qpair failed and we were unable to recover it.
00:37:25.738 [2024-07-21 08:33:35.285584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.738 [2024-07-21 08:33:35.285700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.739 [2024-07-21 08:33:35.285726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.739 [2024-07-21 08:33:35.285741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.739 [2024-07-21 08:33:35.285754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.739 [2024-07-21 08:33:35.285784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.739 qpair failed and we were unable to recover it.
00:37:25.739 [2024-07-21 08:33:35.295587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.739 [2024-07-21 08:33:35.295691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.739 [2024-07-21 08:33:35.295717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.739 [2024-07-21 08:33:35.295731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.739 [2024-07-21 08:33:35.295745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.739 [2024-07-21 08:33:35.295774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.739 qpair failed and we were unable to recover it.
00:37:25.739 [2024-07-21 08:33:35.305651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.739 [2024-07-21 08:33:35.305753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.739 [2024-07-21 08:33:35.305780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.739 [2024-07-21 08:33:35.305801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.739 [2024-07-21 08:33:35.305815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.739 [2024-07-21 08:33:35.305845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.739 qpair failed and we were unable to recover it.
00:37:25.739 [2024-07-21 08:33:35.315698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.739 [2024-07-21 08:33:35.315800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.739 [2024-07-21 08:33:35.315826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.739 [2024-07-21 08:33:35.315840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.739 [2024-07-21 08:33:35.315853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.739 [2024-07-21 08:33:35.315883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.739 qpair failed and we were unable to recover it.
00:37:25.739 [2024-07-21 08:33:35.325784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.739 [2024-07-21 08:33:35.325894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.739 [2024-07-21 08:33:35.325919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.739 [2024-07-21 08:33:35.325933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.739 [2024-07-21 08:33:35.325947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.739 [2024-07-21 08:33:35.325977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.739 qpair failed and we were unable to recover it.
00:37:25.739 [2024-07-21 08:33:35.335731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.739 [2024-07-21 08:33:35.335829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.739 [2024-07-21 08:33:35.335857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.739 [2024-07-21 08:33:35.335872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.739 [2024-07-21 08:33:35.335885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.739 [2024-07-21 08:33:35.335916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.739 qpair failed and we were unable to recover it.
00:37:25.739 [2024-07-21 08:33:35.345749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.739 [2024-07-21 08:33:35.345874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.739 [2024-07-21 08:33:35.345900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.739 [2024-07-21 08:33:35.345914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.739 [2024-07-21 08:33:35.345927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.739 [2024-07-21 08:33:35.345956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.739 qpair failed and we were unable to recover it.
00:37:25.739 [2024-07-21 08:33:35.355789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.739 [2024-07-21 08:33:35.355916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.739 [2024-07-21 08:33:35.355942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.739 [2024-07-21 08:33:35.355956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.739 [2024-07-21 08:33:35.355970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.739 [2024-07-21 08:33:35.356000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.739 qpair failed and we were unable to recover it.
00:37:25.739 [2024-07-21 08:33:35.365803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.739 [2024-07-21 08:33:35.365913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.739 [2024-07-21 08:33:35.365938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.739 [2024-07-21 08:33:35.365952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.739 [2024-07-21 08:33:35.365965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:25.739 [2024-07-21 08:33:35.365994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:25.739 qpair failed and we were unable to recover it.
00:37:26.013 [2024-07-21 08:33:35.375891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.013 [2024-07-21 08:33:35.375996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.013 [2024-07-21 08:33:35.376023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.013 [2024-07-21 08:33:35.376037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.013 [2024-07-21 08:33:35.376051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:26.013 [2024-07-21 08:33:35.376080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:26.013 qpair failed and we were unable to recover it.
00:37:26.013 [2024-07-21 08:33:35.385881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.013 [2024-07-21 08:33:35.385998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.013 [2024-07-21 08:33:35.386024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.013 [2024-07-21 08:33:35.386039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.013 [2024-07-21 08:33:35.386052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:26.013 [2024-07-21 08:33:35.386080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:26.013 qpair failed and we were unable to recover it.
00:37:26.013 [2024-07-21 08:33:35.395898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.013 [2024-07-21 08:33:35.395999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.014 [2024-07-21 08:33:35.396029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.014 [2024-07-21 08:33:35.396045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.014 [2024-07-21 08:33:35.396058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:26.014 [2024-07-21 08:33:35.396087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:26.014 qpair failed and we were unable to recover it.
00:37:26.014 [2024-07-21 08:33:35.405897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.014 [2024-07-21 08:33:35.406001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.014 [2024-07-21 08:33:35.406027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.014 [2024-07-21 08:33:35.406041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.014 [2024-07-21 08:33:35.406055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:26.014 [2024-07-21 08:33:35.406083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:26.014 qpair failed and we were unable to recover it.
00:37:26.014 [2024-07-21 08:33:35.415963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.014 [2024-07-21 08:33:35.416065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.014 [2024-07-21 08:33:35.416091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.014 [2024-07-21 08:33:35.416105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.014 [2024-07-21 08:33:35.416118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:26.014 [2024-07-21 08:33:35.416147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:26.014 qpair failed and we were unable to recover it.
00:37:26.014 [2024-07-21 08:33:35.425963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.014 [2024-07-21 08:33:35.426069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.014 [2024-07-21 08:33:35.426097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.014 [2024-07-21 08:33:35.426111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.014 [2024-07-21 08:33:35.426124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:26.014 [2024-07-21 08:33:35.426155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:26.014 qpair failed and we were unable to recover it.
00:37:26.014 [2024-07-21 08:33:35.435993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.014 [2024-07-21 08:33:35.436091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.014 [2024-07-21 08:33:35.436116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.014 [2024-07-21 08:33:35.436130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.014 [2024-07-21 08:33:35.436142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90
00:37:26.014 [2024-07-21 08:33:35.436180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:26.014 qpair failed and we were unable to recover it.
00:37:26.014 [2024-07-21 08:33:35.446022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.014 [2024-07-21 08:33:35.446124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.014 [2024-07-21 08:33:35.446150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.014 [2024-07-21 08:33:35.446164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.014 [2024-07-21 08:33:35.446178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.014 [2024-07-21 08:33:35.446207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.014 qpair failed and we were unable to recover it. 
00:37:26.014 [2024-07-21 08:33:35.456104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.014 [2024-07-21 08:33:35.456208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.014 [2024-07-21 08:33:35.456234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.014 [2024-07-21 08:33:35.456248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.014 [2024-07-21 08:33:35.456262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.014 [2024-07-21 08:33:35.456291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.014 qpair failed and we were unable to recover it. 
00:37:26.014 [2024-07-21 08:33:35.466094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.014 [2024-07-21 08:33:35.466196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.014 [2024-07-21 08:33:35.466222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.014 [2024-07-21 08:33:35.466236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.014 [2024-07-21 08:33:35.466249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.014 [2024-07-21 08:33:35.466278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.014 qpair failed and we were unable to recover it. 
00:37:26.014 [2024-07-21 08:33:35.476145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.014 [2024-07-21 08:33:35.476247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.014 [2024-07-21 08:33:35.476273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.014 [2024-07-21 08:33:35.476287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.014 [2024-07-21 08:33:35.476300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.014 [2024-07-21 08:33:35.476329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.014 qpair failed and we were unable to recover it. 
00:37:26.014 [2024-07-21 08:33:35.486130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.014 [2024-07-21 08:33:35.486252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.014 [2024-07-21 08:33:35.486282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.014 [2024-07-21 08:33:35.486297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.014 [2024-07-21 08:33:35.486310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.014 [2024-07-21 08:33:35.486340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.014 qpair failed and we were unable to recover it. 
00:37:26.014 [2024-07-21 08:33:35.496177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.014 [2024-07-21 08:33:35.496274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.014 [2024-07-21 08:33:35.496300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.014 [2024-07-21 08:33:35.496314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.014 [2024-07-21 08:33:35.496328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.014 [2024-07-21 08:33:35.496357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.014 qpair failed and we were unable to recover it. 
00:37:26.014 [2024-07-21 08:33:35.506201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.014 [2024-07-21 08:33:35.506340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.014 [2024-07-21 08:33:35.506366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.014 [2024-07-21 08:33:35.506381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.014 [2024-07-21 08:33:35.506394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.014 [2024-07-21 08:33:35.506425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.014 qpair failed and we were unable to recover it. 
00:37:26.014 [2024-07-21 08:33:35.516207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.014 [2024-07-21 08:33:35.516299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.014 [2024-07-21 08:33:35.516325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.014 [2024-07-21 08:33:35.516340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.014 [2024-07-21 08:33:35.516352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.014 [2024-07-21 08:33:35.516395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.014 qpair failed and we were unable to recover it. 
00:37:26.014 [2024-07-21 08:33:35.526247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.014 [2024-07-21 08:33:35.526395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.014 [2024-07-21 08:33:35.526420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.014 [2024-07-21 08:33:35.526434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.014 [2024-07-21 08:33:35.526453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.014 [2024-07-21 08:33:35.526496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.014 qpair failed and we were unable to recover it. 
00:37:26.014 [2024-07-21 08:33:35.536288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.014 [2024-07-21 08:33:35.536424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.014 [2024-07-21 08:33:35.536450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.014 [2024-07-21 08:33:35.536464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.015 [2024-07-21 08:33:35.536477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.015 [2024-07-21 08:33:35.536506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.015 qpair failed and we were unable to recover it. 
00:37:26.015 [2024-07-21 08:33:35.546285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.015 [2024-07-21 08:33:35.546386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.015 [2024-07-21 08:33:35.546413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.015 [2024-07-21 08:33:35.546427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.015 [2024-07-21 08:33:35.546440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.015 [2024-07-21 08:33:35.546471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.015 qpair failed and we were unable to recover it. 
00:37:26.015 [2024-07-21 08:33:35.556344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.015 [2024-07-21 08:33:35.556443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.015 [2024-07-21 08:33:35.556469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.015 [2024-07-21 08:33:35.556483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.015 [2024-07-21 08:33:35.556496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.015 [2024-07-21 08:33:35.556526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.015 qpair failed and we were unable to recover it. 
00:37:26.015 [2024-07-21 08:33:35.566369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.015 [2024-07-21 08:33:35.566477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.015 [2024-07-21 08:33:35.566503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.015 [2024-07-21 08:33:35.566517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.015 [2024-07-21 08:33:35.566530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.015 [2024-07-21 08:33:35.566559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.015 qpair failed and we were unable to recover it. 
00:37:26.015 [2024-07-21 08:33:35.576415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.015 [2024-07-21 08:33:35.576532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.015 [2024-07-21 08:33:35.576561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.015 [2024-07-21 08:33:35.576578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.015 [2024-07-21 08:33:35.576591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.015 [2024-07-21 08:33:35.576632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.015 qpair failed and we were unable to recover it. 
00:37:26.015 [2024-07-21 08:33:35.586436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.015 [2024-07-21 08:33:35.586533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.015 [2024-07-21 08:33:35.586560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.015 [2024-07-21 08:33:35.586577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.015 [2024-07-21 08:33:35.586590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.015 [2024-07-21 08:33:35.586632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.015 qpair failed and we were unable to recover it. 
00:37:26.015 [2024-07-21 08:33:35.596471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.015 [2024-07-21 08:33:35.596568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.015 [2024-07-21 08:33:35.596595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.015 [2024-07-21 08:33:35.596610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.015 [2024-07-21 08:33:35.596634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.015 [2024-07-21 08:33:35.596665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.015 qpair failed and we were unable to recover it. 
00:37:26.015 [2024-07-21 08:33:35.606469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.015 [2024-07-21 08:33:35.606576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.015 [2024-07-21 08:33:35.606602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.015 [2024-07-21 08:33:35.606627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.015 [2024-07-21 08:33:35.606643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.015 [2024-07-21 08:33:35.606673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.015 qpair failed and we were unable to recover it. 
00:37:26.015 [2024-07-21 08:33:35.616515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.015 [2024-07-21 08:33:35.616711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.015 [2024-07-21 08:33:35.616738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.015 [2024-07-21 08:33:35.616753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.015 [2024-07-21 08:33:35.616771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.015 [2024-07-21 08:33:35.616802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.015 qpair failed and we were unable to recover it. 
00:37:26.015 [2024-07-21 08:33:35.626539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.015 [2024-07-21 08:33:35.626649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.015 [2024-07-21 08:33:35.626675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.015 [2024-07-21 08:33:35.626689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.015 [2024-07-21 08:33:35.626703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.015 [2024-07-21 08:33:35.626734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.015 qpair failed and we were unable to recover it. 
00:37:26.015 [2024-07-21 08:33:35.636589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.015 [2024-07-21 08:33:35.636698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.015 [2024-07-21 08:33:35.636725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.015 [2024-07-21 08:33:35.636739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.015 [2024-07-21 08:33:35.636752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.015 [2024-07-21 08:33:35.636781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.015 qpair failed and we were unable to recover it. 
00:37:26.273 [2024-07-21 08:33:35.646596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.273 [2024-07-21 08:33:35.646713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.273 [2024-07-21 08:33:35.646742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.273 [2024-07-21 08:33:35.646757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.273 [2024-07-21 08:33:35.646770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.273 [2024-07-21 08:33:35.646813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.273 qpair failed and we were unable to recover it. 
00:37:26.273 [2024-07-21 08:33:35.656638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.273 [2024-07-21 08:33:35.656764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.273 [2024-07-21 08:33:35.656790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.273 [2024-07-21 08:33:35.656804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.273 [2024-07-21 08:33:35.656818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.273 [2024-07-21 08:33:35.656847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.273 qpair failed and we were unable to recover it. 
00:37:26.273 [2024-07-21 08:33:35.666654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.273 [2024-07-21 08:33:35.666754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.273 [2024-07-21 08:33:35.666780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.273 [2024-07-21 08:33:35.666794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.273 [2024-07-21 08:33:35.666807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.273 [2024-07-21 08:33:35.666836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.273 qpair failed and we were unable to recover it. 
00:37:26.273 [2024-07-21 08:33:35.676660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.273 [2024-07-21 08:33:35.676756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.273 [2024-07-21 08:33:35.676781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.273 [2024-07-21 08:33:35.676796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.273 [2024-07-21 08:33:35.676811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.273 [2024-07-21 08:33:35.676840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.273 qpair failed and we were unable to recover it. 
00:37:26.273 [2024-07-21 08:33:35.686742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.273 [2024-07-21 08:33:35.686857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.273 [2024-07-21 08:33:35.686884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.273 [2024-07-21 08:33:35.686899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.273 [2024-07-21 08:33:35.686912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.273 [2024-07-21 08:33:35.686955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.273 qpair failed and we were unable to recover it. 
00:37:26.273 [2024-07-21 08:33:35.696740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.273 [2024-07-21 08:33:35.696861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.273 [2024-07-21 08:33:35.696888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.273 [2024-07-21 08:33:35.696903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.273 [2024-07-21 08:33:35.696916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.273 [2024-07-21 08:33:35.696945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.273 qpair failed and we were unable to recover it. 
00:37:26.273 [2024-07-21 08:33:35.706767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.273 [2024-07-21 08:33:35.706861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.273 [2024-07-21 08:33:35.706886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.273 [2024-07-21 08:33:35.706906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.273 [2024-07-21 08:33:35.706920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.273 [2024-07-21 08:33:35.706950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.273 qpair failed and we were unable to recover it. 
00:37:26.273 [2024-07-21 08:33:35.716819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.273 [2024-07-21 08:33:35.716953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.273 [2024-07-21 08:33:35.716980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.273 [2024-07-21 08:33:35.716994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.273 [2024-07-21 08:33:35.717007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.273 [2024-07-21 08:33:35.717037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.274 qpair failed and we were unable to recover it. 
00:37:26.274 [2024-07-21 08:33:35.726835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.274 [2024-07-21 08:33:35.726935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.274 [2024-07-21 08:33:35.726960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.274 [2024-07-21 08:33:35.726974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.274 [2024-07-21 08:33:35.726987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.274 [2024-07-21 08:33:35.727016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.274 qpair failed and we were unable to recover it. 
00:37:26.274 [The same six-message CONNECT failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7fd7dc000b90; CQ transport error -6 on qpair id 2) repeats at roughly 10 ms intervals from 08:33:35.736 through 08:33:36.078, each attempt ending with "qpair failed and we were unable to recover it."]
00:37:26.534 [2024-07-21 08:33:36.087893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.534 [2024-07-21 08:33:36.088031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.534 [2024-07-21 08:33:36.088058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.534 [2024-07-21 08:33:36.088072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.534 [2024-07-21 08:33:36.088086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.534 [2024-07-21 08:33:36.088117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.534 qpair failed and we were unable to recover it. 
00:37:26.534 [2024-07-21 08:33:36.097896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.534 [2024-07-21 08:33:36.097995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.534 [2024-07-21 08:33:36.098022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.534 [2024-07-21 08:33:36.098036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.534 [2024-07-21 08:33:36.098049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.534 [2024-07-21 08:33:36.098080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.534 qpair failed and we were unable to recover it. 
00:37:26.534 [2024-07-21 08:33:36.107888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.534 [2024-07-21 08:33:36.107982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.534 [2024-07-21 08:33:36.108008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.534 [2024-07-21 08:33:36.108029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.534 [2024-07-21 08:33:36.108043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.534 [2024-07-21 08:33:36.108074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.534 qpair failed and we were unable to recover it. 
00:37:26.534 [2024-07-21 08:33:36.117903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.534 [2024-07-21 08:33:36.117998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.534 [2024-07-21 08:33:36.118024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.534 [2024-07-21 08:33:36.118039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.534 [2024-07-21 08:33:36.118052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.534 [2024-07-21 08:33:36.118082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.534 qpair failed and we were unable to recover it. 
00:37:26.534 [2024-07-21 08:33:36.127982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.534 [2024-07-21 08:33:36.128086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.534 [2024-07-21 08:33:36.128112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.534 [2024-07-21 08:33:36.128126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.534 [2024-07-21 08:33:36.128139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.534 [2024-07-21 08:33:36.128181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.534 qpair failed and we were unable to recover it. 
00:37:26.534 [2024-07-21 08:33:36.138053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.534 [2024-07-21 08:33:36.138156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.534 [2024-07-21 08:33:36.138183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.534 [2024-07-21 08:33:36.138197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.534 [2024-07-21 08:33:36.138210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.534 [2024-07-21 08:33:36.138240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.534 qpair failed and we were unable to recover it. 
00:37:26.534 [2024-07-21 08:33:36.148052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.534 [2024-07-21 08:33:36.148153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.535 [2024-07-21 08:33:36.148179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.535 [2024-07-21 08:33:36.148194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.535 [2024-07-21 08:33:36.148207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.535 [2024-07-21 08:33:36.148238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.535 qpair failed and we were unable to recover it. 
00:37:26.535 [2024-07-21 08:33:36.158022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.535 [2024-07-21 08:33:36.158122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.535 [2024-07-21 08:33:36.158148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.535 [2024-07-21 08:33:36.158162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.535 [2024-07-21 08:33:36.158177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.535 [2024-07-21 08:33:36.158207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.535 qpair failed and we were unable to recover it. 
00:37:26.793 [2024-07-21 08:33:36.168096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.793 [2024-07-21 08:33:36.168203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.793 [2024-07-21 08:33:36.168229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.793 [2024-07-21 08:33:36.168244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.793 [2024-07-21 08:33:36.168257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.793 [2024-07-21 08:33:36.168287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.793 qpair failed and we were unable to recover it. 
00:37:26.793 [2024-07-21 08:33:36.178119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.793 [2024-07-21 08:33:36.178215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.793 [2024-07-21 08:33:36.178241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.793 [2024-07-21 08:33:36.178255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.793 [2024-07-21 08:33:36.178268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.793 [2024-07-21 08:33:36.178297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.793 qpair failed and we were unable to recover it. 
00:37:26.793 [2024-07-21 08:33:36.188102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.793 [2024-07-21 08:33:36.188229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.793 [2024-07-21 08:33:36.188255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.793 [2024-07-21 08:33:36.188270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.793 [2024-07-21 08:33:36.188283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.793 [2024-07-21 08:33:36.188314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.793 qpair failed and we were unable to recover it. 
00:37:26.793 [2024-07-21 08:33:36.198191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.793 [2024-07-21 08:33:36.198292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.793 [2024-07-21 08:33:36.198326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.793 [2024-07-21 08:33:36.198341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.793 [2024-07-21 08:33:36.198354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.793 [2024-07-21 08:33:36.198396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.793 qpair failed and we were unable to recover it. 
00:37:26.793 [2024-07-21 08:33:36.208183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.793 [2024-07-21 08:33:36.208287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.793 [2024-07-21 08:33:36.208312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.793 [2024-07-21 08:33:36.208327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.793 [2024-07-21 08:33:36.208339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.793 [2024-07-21 08:33:36.208370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.793 qpair failed and we were unable to recover it. 
00:37:26.793 [2024-07-21 08:33:36.218211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.793 [2024-07-21 08:33:36.218338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.793 [2024-07-21 08:33:36.218364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.793 [2024-07-21 08:33:36.218379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.793 [2024-07-21 08:33:36.218392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.793 [2024-07-21 08:33:36.218434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.793 qpair failed and we were unable to recover it. 
00:37:26.793 [2024-07-21 08:33:36.228231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.793 [2024-07-21 08:33:36.228335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.793 [2024-07-21 08:33:36.228361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.793 [2024-07-21 08:33:36.228375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.793 [2024-07-21 08:33:36.228388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.793 [2024-07-21 08:33:36.228417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.793 qpair failed and we were unable to recover it. 
00:37:26.793 [2024-07-21 08:33:36.238243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.793 [2024-07-21 08:33:36.238339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.793 [2024-07-21 08:33:36.238365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.793 [2024-07-21 08:33:36.238379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.793 [2024-07-21 08:33:36.238392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.793 [2024-07-21 08:33:36.238427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.793 qpair failed and we were unable to recover it. 
00:37:26.793 [2024-07-21 08:33:36.248299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.793 [2024-07-21 08:33:36.248427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.793 [2024-07-21 08:33:36.248453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.793 [2024-07-21 08:33:36.248467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.793 [2024-07-21 08:33:36.248480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.793 [2024-07-21 08:33:36.248509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.793 qpair failed and we were unable to recover it. 
00:37:26.793 [2024-07-21 08:33:36.258323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.793 [2024-07-21 08:33:36.258424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.793 [2024-07-21 08:33:36.258449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.793 [2024-07-21 08:33:36.258464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.793 [2024-07-21 08:33:36.258477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.793 [2024-07-21 08:33:36.258506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.268339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.268444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.268473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.268489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.268502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.794 [2024-07-21 08:33:36.268532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.278348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.278439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.278466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.278480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.278493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.794 [2024-07-21 08:33:36.278521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.288402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.288512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.288543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.288560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.288573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.794 [2024-07-21 08:33:36.288602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.298422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.298522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.298548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.298562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.298575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.794 [2024-07-21 08:33:36.298605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.308457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.308564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.308590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.308605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.308626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.794 [2024-07-21 08:33:36.308657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.318477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.318573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.318598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.318619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.318634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.794 [2024-07-21 08:33:36.318664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.328533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.328648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.328674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.328689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.328702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7dc000b90 00:37:26.794 [2024-07-21 08:33:36.328737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.338558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.338666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.338699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.338715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.338729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7d4000b90 00:37:26.794 [2024-07-21 08:33:36.338760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.348580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.348694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.348721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.348736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.348750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7d4000b90 00:37:26.794 [2024-07-21 08:33:36.348780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.358641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.358748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.358775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.358790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.358803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7d4000b90 00:37:26.794 [2024-07-21 08:33:36.358834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.368648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.368757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.368789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.368805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.368818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7e4000b90 00:37:26.794 [2024-07-21 08:33:36.368849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.794 qpair failed and we were unable to recover it. 
00:37:26.794 [2024-07-21 08:33:36.378657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.794 [2024-07-21 08:33:36.378757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.794 [2024-07-21 08:33:36.378789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.794 [2024-07-21 08:33:36.378804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.794 [2024-07-21 08:33:36.378818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7e4000b90 00:37:26.794 [2024-07-21 08:33:36.378848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.794 qpair failed and we were unable to recover it. 00:37:26.794 [2024-07-21 08:33:36.378949] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:37:26.794 A controller has encountered a failure and is being reset. 00:37:27.052 Controller properly reset. 00:37:27.052 Initializing NVMe Controllers 00:37:27.052 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:27.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:27.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:27.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:27.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:27.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:27.052 Initialization complete. Launching workers. 
00:37:27.052 Starting thread on core 1 00:37:27.052 Starting thread on core 2 00:37:27.052 Starting thread on core 3 00:37:27.052 Starting thread on core 0 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:27.052 00:37:27.052 real 0m10.919s 00:37:27.052 user 0m18.128s 00:37:27.052 sys 0m5.508s 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:27.052 ************************************ 00:37:27.052 END TEST nvmf_target_disconnect_tc2 00:37:27.052 ************************************ 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:27.052 rmmod nvme_tcp 00:37:27.052 rmmod nvme_fabrics 00:37:27.052 rmmod nvme_keyring 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:27.052 
08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 85017 ']' 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 85017 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 85017 ']' 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 85017 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85017 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85017' 00:37:27.052 killing process with pid 85017 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 85017 00:37:27.052 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 85017 00:37:27.310 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:27.310 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:27.310 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:27.310 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:27.310 08:33:36 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:27.310 08:33:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.310 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:27.310 08:33:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.210 08:33:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:29.210 00:37:29.210 real 0m15.621s 00:37:29.210 user 0m44.836s 00:37:29.210 sys 0m7.410s 00:37:29.210 08:33:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:29.210 08:33:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:29.210 ************************************ 00:37:29.210 END TEST nvmf_target_disconnect 00:37:29.210 ************************************ 00:37:29.210 08:33:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:29.210 08:33:38 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:37:29.210 08:33:38 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:29.210 08:33:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:29.469 08:33:38 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:37:29.469 00:37:29.469 real 27m5.074s 00:37:29.469 user 73m53.114s 00:37:29.469 sys 6m21.608s 00:37:29.469 08:33:38 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:29.469 08:33:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:29.469 ************************************ 00:37:29.469 END TEST nvmf_tcp 00:37:29.469 ************************************ 00:37:29.469 08:33:38 -- common/autotest_common.sh@1142 -- # return 0 00:37:29.469 08:33:38 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:37:29.469 08:33:38 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:29.469 08:33:38 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:29.469 08:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:29.469 08:33:38 -- common/autotest_common.sh@10 -- # set +x 00:37:29.469 ************************************ 00:37:29.469 START TEST spdkcli_nvmf_tcp 00:37:29.469 ************************************ 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:29.469 * Looking for test storage... 00:37:29.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.469 08:33:38 spdkcli_nvmf_tcp 
-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=86214 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 86214 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 86214 ']' 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:29.469 08:33:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:29.469 [2024-07-21 08:33:39.011609] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:37:29.469 [2024-07-21 08:33:39.011717] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86214 ] 00:37:29.469 EAL: No free 2048 kB hugepages reported on node 1 00:37:29.469 [2024-07-21 08:33:39.068408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:29.728 [2024-07-21 08:33:39.155892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.728 [2024-07-21 08:33:39.155895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.728 08:33:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:29.728 08:33:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:37:29.728 08:33:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:29.728 08:33:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:29.728 08:33:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:29.728 08:33:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:29.728 08:33:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:29.728 08:33:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:29.728 08:33:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:29.728 08:33:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:29.728 08:33:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:29.728 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:29.728 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:29.728 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:29.728 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:29.728 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:29.728 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:29.728 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:29.728 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:29.728 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:29.728 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:29.728 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:29.728 ' 00:37:32.255 [2024-07-21 08:33:41.829876] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.656 [2024-07-21 08:33:43.062175] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:36.177 [2024-07-21 08:33:45.337449] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:38.073 [2024-07-21 08:33:47.287601] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:39.444 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', 
True] 00:37:39.444 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:39.444 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:39.444 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:39.444 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:39.444 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:39.444 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:39.444 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:39.444 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:39.444 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:39.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:39.444 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:39.444 08:33:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:39.444 08:33:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:39.444 08:33:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:39.444 08:33:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:39.444 08:33:48 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:37:39.444 08:33:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:39.444 08:33:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:39.444 08:33:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:40.006 08:33:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:40.006 08:33:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:40.006 08:33:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:40.006 08:33:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:40.006 08:33:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:40.006 08:33:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:40.006 08:33:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:40.006 08:33:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:40.006 08:33:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:40.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:40.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:40.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:40.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:40.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:40.006 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:40.006 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:40.006 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:40.006 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:40.006 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:40.006 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:40.006 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:40.006 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:40.006 ' 00:37:45.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:45.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:45.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:45.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:45.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:45.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:45.258 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:45.258 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:45.258 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:45.258 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:45.258 
Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:45.258 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:45.258 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:45.258 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 86214 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 86214 ']' 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 86214 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86214 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86214' 00:37:45.258 killing process with pid 86214 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 86214 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 86214 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 86214 ']' 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- 
# killprocess 86214 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 86214 ']' 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 86214 00:37:45.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (86214) - No such process 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 86214 is not found' 00:37:45.258 Process with pid 86214 is not found 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:45.258 00:37:45.258 real 0m15.950s 00:37:45.258 user 0m33.789s 00:37:45.258 sys 0m0.793s 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:45.258 08:33:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:45.258 ************************************ 00:37:45.258 END TEST spdkcli_nvmf_tcp 00:37:45.258 ************************************ 00:37:45.258 08:33:54 -- common/autotest_common.sh@1142 -- # return 0 00:37:45.258 08:33:54 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:45.258 08:33:54 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:45.258 08:33:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:45.258 08:33:54 -- common/autotest_common.sh@10 -- # set +x 00:37:45.515 ************************************ 00:37:45.515 START TEST nvmf_identify_passthru 00:37:45.515 ************************************ 00:37:45.515 
08:33:54 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:45.515 * Looking for test storage... 00:37:45.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:45.516 08:33:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:45.516 08:33:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:45.516 08:33:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:45.516 08:33:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:45.516 08:33:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.516 08:33:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.516 08:33:54 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.516 08:33:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:45.516 08:33:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:45.516 08:33:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:45.516 08:33:54 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:45.516 08:33:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:45.516 08:33:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:45.516 08:33:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.516 08:33:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.516 08:33:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.516 08:33:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:37:45.516 08:33:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.516 08:33:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:45.516 08:33:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:45.516 08:33:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:45.516 08:33:54 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:37:45.516 08:33:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:47.413 08:33:56 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:47.413 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:47.413 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:47.413 08:33:56 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:47.413 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:47.413 08:33:56 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:47.413 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:47.413 08:33:56 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:47.413 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:47.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:47.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:37:47.414 00:37:47.414 --- 10.0.0.2 ping statistics --- 00:37:47.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.414 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:47.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:47.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:37:47.414 00:37:47.414 --- 10.0.0.1 ping statistics --- 00:37:47.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.414 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:47.414 08:33:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:47.414 08:33:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:47.414 08:33:56 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:47.414 08:33:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:47.414 08:33:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:47.414 08:33:56 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:37:47.414 08:33:56 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:37:47.414 08:33:56 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:37:47.414 08:33:56 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:37:47.414 08:33:56 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:37:47.414 08:33:56 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:37:47.414 08:33:56 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:47.414 08:33:56 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:47.414 08:33:56 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:37:47.414 08:33:57 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:37:47.672 08:33:57 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:37:47.672 08:33:57 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:37:47.672 08:33:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:37:47.672 08:33:57 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:37:47.672 08:33:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:37:47.672 08:33:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:47.672 08:33:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:47.672 EAL: No free 2048 kB hugepages reported on node 1 00:37:51.887 08:34:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:37:51.887 08:34:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:37:51.887 08:34:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:37:51.887 08:34:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:51.887 EAL: No free 2048 kB hugepages reported on node 1 00:37:56.065 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:37:56.065 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:56.065 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:56.065 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.065 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:56.065 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:56.065 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.065 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=90707 00:37:56.065 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:56.065 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:56.065 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 90707 00:37:56.065 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 90707 ']' 00:37:56.065 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.065 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:56.065 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:56.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.065 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:56.065 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.065 [2024-07-21 08:34:05.572472] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:37:56.065 [2024-07-21 08:34:05.572562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.065 EAL: No free 2048 kB hugepages reported on node 1 00:37:56.065 [2024-07-21 08:34:05.639131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:56.322 [2024-07-21 08:34:05.729041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:56.322 [2024-07-21 08:34:05.729095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:56.322 [2024-07-21 08:34:05.729110] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:56.322 [2024-07-21 08:34:05.729122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:56.322 [2024-07-21 08:34:05.729152] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:56.322 [2024-07-21 08:34:05.729222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:56.322 [2024-07-21 08:34:05.729484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:56.322 [2024-07-21 08:34:05.729549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.322 [2024-07-21 08:34:05.729546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:56.322 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:56.322 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:37:56.323 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.323 INFO: Log level set to 20 00:37:56.323 INFO: Requests: 00:37:56.323 { 00:37:56.323 "jsonrpc": "2.0", 00:37:56.323 "method": "nvmf_set_config", 00:37:56.323 "id": 1, 00:37:56.323 "params": { 00:37:56.323 "admin_cmd_passthru": { 00:37:56.323 "identify_ctrlr": true 00:37:56.323 } 00:37:56.323 } 00:37:56.323 } 00:37:56.323 00:37:56.323 INFO: response: 00:37:56.323 { 00:37:56.323 "jsonrpc": "2.0", 00:37:56.323 "id": 1, 00:37:56.323 "result": true 00:37:56.323 } 00:37:56.323 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.323 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.323 INFO: Setting log level to 20 00:37:56.323 INFO: Setting log level to 20 00:37:56.323 INFO: Log level set to 20 00:37:56.323 INFO: Log level set to 20 00:37:56.323 
INFO: Requests: 00:37:56.323 { 00:37:56.323 "jsonrpc": "2.0", 00:37:56.323 "method": "framework_start_init", 00:37:56.323 "id": 1 00:37:56.323 } 00:37:56.323 00:37:56.323 INFO: Requests: 00:37:56.323 { 00:37:56.323 "jsonrpc": "2.0", 00:37:56.323 "method": "framework_start_init", 00:37:56.323 "id": 1 00:37:56.323 } 00:37:56.323 00:37:56.323 [2024-07-21 08:34:05.899792] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:56.323 INFO: response: 00:37:56.323 { 00:37:56.323 "jsonrpc": "2.0", 00:37:56.323 "id": 1, 00:37:56.323 "result": true 00:37:56.323 } 00:37:56.323 00:37:56.323 INFO: response: 00:37:56.323 { 00:37:56.323 "jsonrpc": "2.0", 00:37:56.323 "id": 1, 00:37:56.323 "result": true 00:37:56.323 } 00:37:56.323 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.323 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.323 INFO: Setting log level to 40 00:37:56.323 INFO: Setting log level to 40 00:37:56.323 INFO: Setting log level to 40 00:37:56.323 [2024-07-21 08:34:05.909789] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:56.323 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.323 08:34:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:37:56.323 08:34:05 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:56.323 08:34:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:59.651 Nvme0n1 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.651 08:34:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.651 08:34:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.651 08:34:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:59.651 [2024-07-21 08:34:08.796022] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.651 08:34:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.651 08:34:08 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:59.651 [ 00:37:59.651 { 00:37:59.651 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:59.651 "subtype": "Discovery", 00:37:59.651 "listen_addresses": [], 00:37:59.651 "allow_any_host": true, 00:37:59.651 "hosts": [] 00:37:59.651 }, 00:37:59.651 { 00:37:59.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:59.651 "subtype": "NVMe", 00:37:59.651 "listen_addresses": [ 00:37:59.651 { 00:37:59.651 "trtype": "TCP", 00:37:59.651 "adrfam": "IPv4", 00:37:59.651 "traddr": "10.0.0.2", 00:37:59.651 "trsvcid": "4420" 00:37:59.651 } 00:37:59.651 ], 00:37:59.651 "allow_any_host": true, 00:37:59.651 "hosts": [], 00:37:59.651 "serial_number": "SPDK00000000000001", 00:37:59.651 "model_number": "SPDK bdev Controller", 00:37:59.651 "max_namespaces": 1, 00:37:59.651 "min_cntlid": 1, 00:37:59.651 "max_cntlid": 65519, 00:37:59.651 "namespaces": [ 00:37:59.651 { 00:37:59.651 "nsid": 1, 00:37:59.651 "bdev_name": "Nvme0n1", 00:37:59.651 "name": "Nvme0n1", 00:37:59.651 "nguid": "1B12B2C733C345F89B925AB6F07A025A", 00:37:59.651 "uuid": "1b12b2c7-33c3-45f8-9b92-5ab6f07a025a" 00:37:59.651 } 00:37:59.651 ] 00:37:59.651 } 00:37:59.651 ] 00:37:59.651 08:34:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.651 08:34:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:59.651 08:34:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:59.651 08:34:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:59.651 EAL: No free 2048 kB hugepages reported on node 1 00:37:59.651 08:34:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:37:59.651 08:34:08 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:59.651 08:34:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:59.651 08:34:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:59.651 EAL: No free 2048 kB hugepages reported on node 1 00:37:59.651 08:34:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:37:59.651 08:34:09 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:37:59.651 08:34:09 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:37:59.651 08:34:09 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:59.651 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.651 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:59.651 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:59.651 08:34:09 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:59.651 08:34:09 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:59.651 08:34:09 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:59.651 08:34:09 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:37:59.651 08:34:09 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:59.651 08:34:09 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:37:59.651 08:34:09 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:59.651 08:34:09 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:59.651 rmmod 
nvme_tcp 00:37:59.651 rmmod nvme_fabrics 00:37:59.651 rmmod nvme_keyring 00:37:59.651 08:34:09 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:59.651 08:34:09 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:37:59.651 08:34:09 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:37:59.652 08:34:09 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 90707 ']' 00:37:59.652 08:34:09 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 90707 00:37:59.652 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 90707 ']' 00:37:59.652 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 90707 00:37:59.652 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:37:59.652 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:59.652 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90707 00:37:59.652 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:59.652 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:59.652 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90707' 00:37:59.652 killing process with pid 90707 00:37:59.652 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 90707 00:37:59.652 08:34:09 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 90707 00:38:01.551 08:34:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:01.551 08:34:10 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:01.551 08:34:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:01.551 08:34:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:01.551 08:34:10 
nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:01.551 08:34:10 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.551 08:34:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:01.551 08:34:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.450 08:34:12 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:03.450 00:38:03.450 real 0m17.890s 00:38:03.450 user 0m26.496s 00:38:03.450 sys 0m2.227s 00:38:03.450 08:34:12 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:03.450 08:34:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:03.450 ************************************ 00:38:03.450 END TEST nvmf_identify_passthru 00:38:03.450 ************************************ 00:38:03.450 08:34:12 -- common/autotest_common.sh@1142 -- # return 0 00:38:03.450 08:34:12 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:03.450 08:34:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:03.450 08:34:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:03.450 08:34:12 -- common/autotest_common.sh@10 -- # set +x 00:38:03.450 ************************************ 00:38:03.450 START TEST nvmf_dif 00:38:03.450 ************************************ 00:38:03.450 08:34:12 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:03.450 * Looking for test storage... 
00:38:03.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:03.450 08:34:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:03.450 08:34:12 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:03.450 08:34:12 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.450 08:34:12 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.450 08:34:12 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.450 08:34:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.450 08:34:12 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.451 08:34:12 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.451 08:34:12 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:38:03.451 08:34:12 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:03.451 08:34:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:03.451 08:34:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:03.451 08:34:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:03.451 08:34:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:03.451 08:34:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.451 08:34:12 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:03.451 08:34:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:03.451 08:34:12 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:38:03.451 08:34:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:05.346 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:05.346 08:34:14 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:38:05.347 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:05.347 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:05.347 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:05.347 08:34:14 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:05.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:05.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:38:05.347 00:38:05.347 --- 10.0.0.2 ping statistics --- 00:38:05.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.347 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:05.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:05.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:38:05.347 00:38:05.347 --- 10.0.0.1 ping statistics --- 00:38:05.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.347 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:05.347 08:34:14 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:06.719 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:06.719 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:38:06.719 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:06.719 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:06.719 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:06.719 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:06.719 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:06.719 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:06.719 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:06.719 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:06.719 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:06.719 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:06.719 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:06.719 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:06.719 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:06.719 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:06.719 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:06.719 08:34:16 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:06.719 08:34:16 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:06.719 08:34:16 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:06.719 08:34:16 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:06.719 08:34:16 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:06.719 08:34:16 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:06.719 08:34:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:06.719 08:34:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:06.719 08:34:16 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:06.719 08:34:16 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:06.719 08:34:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:06.719 08:34:16 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=93988 00:38:06.719 08:34:16 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:06.719 08:34:16 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 93988 00:38:06.719 08:34:16 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 93988 ']' 00:38:06.719 08:34:16 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:06.719 08:34:16 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:06.719 08:34:16 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:06.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:06.719 08:34:16 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:06.719 08:34:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:06.719 [2024-07-21 08:34:16.228511] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:38:06.719 [2024-07-21 08:34:16.228579] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:06.719 EAL: No free 2048 kB hugepages reported on node 1 00:38:06.719 [2024-07-21 08:34:16.290712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.976 [2024-07-21 08:34:16.382279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:06.977 [2024-07-21 08:34:16.382340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:06.977 [2024-07-21 08:34:16.382369] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:06.977 [2024-07-21 08:34:16.382381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:06.977 [2024-07-21 08:34:16.382392] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:06.977 [2024-07-21 08:34:16.382427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.977 08:34:16 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:06.977 08:34:16 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:38:06.977 08:34:16 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:06.977 08:34:16 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:06.977 08:34:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:06.977 08:34:16 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:06.977 08:34:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:06.977 08:34:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:06.977 08:34:16 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.977 08:34:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:06.977 [2024-07-21 08:34:16.529715] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:06.977 08:34:16 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.977 08:34:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:06.977 08:34:16 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:06.977 08:34:16 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:06.977 08:34:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:06.977 ************************************ 00:38:06.977 START TEST fio_dif_1_default 00:38:06.977 ************************************ 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:06.977 bdev_null0 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:06.977 [2024-07-21 08:34:16.590029] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:06.977 { 00:38:06.977 "params": { 00:38:06.977 "name": "Nvme$subsystem", 00:38:06.977 "trtype": "$TEST_TRANSPORT", 00:38:06.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.977 "adrfam": "ipv4", 00:38:06.977 "trsvcid": "$NVMF_PORT", 00:38:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.977 "hdgst": ${hdgst:-false}, 00:38:06.977 "ddgst": ${ddgst:-false} 00:38:06.977 }, 00:38:06.977 "method": "bdev_nvme_attach_controller" 00:38:06.977 } 00:38:06.977 EOF 00:38:06.977 )") 
00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:38:06.977 08:34:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:06.977 "params": { 00:38:06.977 "name": "Nvme0", 00:38:06.977 "trtype": "tcp", 00:38:06.977 "traddr": "10.0.0.2", 00:38:06.977 "adrfam": "ipv4", 00:38:06.977 "trsvcid": "4420", 00:38:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:06.977 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:06.977 "hdgst": false, 00:38:06.977 "ddgst": false 00:38:06.977 }, 00:38:06.977 "method": "bdev_nvme_attach_controller" 00:38:06.977 }' 00:38:07.234 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:07.234 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:07.234 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:07.234 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:07.234 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:07.234 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:07.234 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:07.234 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:07.234 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:07.234 08:34:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:07.234 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:07.234 fio-3.35 
00:38:07.234 Starting 1 thread 00:38:07.489 EAL: No free 2048 kB hugepages reported on node 1 00:38:19.667 00:38:19.667 filename0: (groupid=0, jobs=1): err= 0: pid=94210: Sun Jul 21 08:34:27 2024 00:38:19.667 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:38:19.667 slat (nsec): min=3935, max=58863, avg=9462.65, stdev=3184.44 00:38:19.667 clat (usec): min=40858, max=47705, avg=41006.24, stdev=437.88 00:38:19.667 lat (usec): min=40866, max=47721, avg=41015.70, stdev=437.93 00:38:19.667 clat percentiles (usec): 00:38:19.667 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:19.667 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:19.667 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:19.667 | 99.00th=[41681], 99.50th=[41681], 99.90th=[47449], 99.95th=[47449], 00:38:19.667 | 99.99th=[47449] 00:38:19.667 bw ( KiB/s): min= 384, max= 416, per=99.51%, avg=388.80, stdev=11.72, samples=20 00:38:19.667 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:19.667 lat (msec) : 50=100.00% 00:38:19.667 cpu : usr=90.35%, sys=9.39%, ctx=21, majf=0, minf=247 00:38:19.667 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:19.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:19.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:19.667 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:19.667 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:19.667 00:38:19.667 Run status group 0 (all jobs): 00:38:19.667 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10013-10013msec 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub 
in "$@" 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.667 00:38:19.667 real 0m10.982s 00:38:19.667 user 0m10.090s 00:38:19.667 sys 0m1.211s 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 ************************************ 00:38:19.667 END TEST fio_dif_1_default 00:38:19.667 ************************************ 00:38:19.667 08:34:27 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:19.667 08:34:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:19.667 08:34:27 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:19.667 08:34:27 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:19.667 08:34:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 ************************************ 00:38:19.667 START TEST fio_dif_1_multi_subsystems 
00:38:19.667 ************************************ 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 bdev_null0 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 [2024-07-21 08:34:27.616703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 bdev_null1 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.667 
08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:19.667 08:34:27 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:19.667 { 00:38:19.667 "params": { 00:38:19.667 "name": "Nvme$subsystem", 00:38:19.667 "trtype": "$TEST_TRANSPORT", 00:38:19.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:19.667 "adrfam": "ipv4", 00:38:19.667 "trsvcid": "$NVMF_PORT", 00:38:19.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:19.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:19.667 "hdgst": ${hdgst:-false}, 00:38:19.667 "ddgst": ${ddgst:-false} 00:38:19.667 }, 00:38:19.667 "method": "bdev_nvme_attach_controller" 00:38:19.667 } 00:38:19.667 EOF 00:38:19.667 )") 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:19.667 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@554 -- # cat 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:19.668 { 00:38:19.668 "params": { 00:38:19.668 "name": "Nvme$subsystem", 00:38:19.668 "trtype": "$TEST_TRANSPORT", 00:38:19.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:19.668 "adrfam": "ipv4", 00:38:19.668 "trsvcid": "$NVMF_PORT", 00:38:19.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:19.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:19.668 "hdgst": ${hdgst:-false}, 00:38:19.668 "ddgst": ${ddgst:-false} 00:38:19.668 }, 00:38:19.668 "method": "bdev_nvme_attach_controller" 00:38:19.668 } 00:38:19.668 EOF 00:38:19.668 )") 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:19.668 "params": { 00:38:19.668 "name": "Nvme0", 00:38:19.668 "trtype": "tcp", 00:38:19.668 "traddr": "10.0.0.2", 00:38:19.668 "adrfam": "ipv4", 00:38:19.668 "trsvcid": "4420", 00:38:19.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:19.668 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:19.668 "hdgst": false, 00:38:19.668 "ddgst": false 00:38:19.668 }, 00:38:19.668 "method": "bdev_nvme_attach_controller" 00:38:19.668 },{ 00:38:19.668 "params": { 00:38:19.668 "name": "Nvme1", 00:38:19.668 "trtype": "tcp", 00:38:19.668 "traddr": "10.0.0.2", 00:38:19.668 "adrfam": "ipv4", 00:38:19.668 "trsvcid": "4420", 00:38:19.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:19.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:19.668 "hdgst": false, 00:38:19.668 "ddgst": false 00:38:19.668 }, 00:38:19.668 "method": "bdev_nvme_attach_controller" 00:38:19.668 }' 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:19.668 08:34:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:19.668 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:19.668 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:19.668 fio-3.35 00:38:19.668 Starting 2 threads 00:38:19.668 EAL: No free 2048 kB hugepages reported on node 1 00:38:29.654 00:38:29.654 filename0: (groupid=0, jobs=1): err= 0: pid=95509: Sun Jul 21 08:34:38 2024 00:38:29.654 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10018msec) 00:38:29.654 slat (nsec): min=7838, max=46828, avg=9900.99, stdev=3092.84 00:38:29.654 clat (usec): min=40866, max=45348, avg=41023.60, stdev=331.32 00:38:29.654 lat (usec): min=40874, max=45362, avg=41033.50, stdev=331.43 00:38:29.654 clat percentiles (usec): 00:38:29.654 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:29.654 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:29.654 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:29.654 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:38:29.654 | 99.99th=[45351] 00:38:29.654 bw ( KiB/s): min= 384, max= 416, per=40.35%, avg=388.80, stdev=11.72, samples=20 00:38:29.654 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:29.654 lat (msec) : 50=100.00% 00:38:29.654 cpu : usr=94.55%, sys=5.16%, ctx=59, majf=0, minf=180 00:38:29.654 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:29.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:38:29.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.654 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.654 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:29.654 filename1: (groupid=0, jobs=1): err= 0: pid=95510: Sun Jul 21 08:34:38 2024 00:38:29.654 read: IOPS=143, BW=573KiB/s (586kB/s)(5728KiB/10005msec) 00:38:29.654 slat (nsec): min=7511, max=21747, avg=9743.47, stdev=2528.80 00:38:29.654 clat (usec): min=576, max=45346, avg=27914.96, stdev=18995.07 00:38:29.654 lat (usec): min=584, max=45359, avg=27924.70, stdev=18995.04 00:38:29.654 clat percentiles (usec): 00:38:29.654 | 1.00th=[ 611], 5.00th=[ 627], 10.00th=[ 635], 20.00th=[ 668], 00:38:29.654 | 30.00th=[ 717], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:29.654 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:29.654 | 99.00th=[41681], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:38:29.654 | 99.99th=[45351] 00:38:29.654 bw ( KiB/s): min= 384, max= 768, per=59.39%, avg=571.20, stdev=181.99, samples=20 00:38:29.654 iops : min= 96, max= 192, avg=142.80, stdev=45.50, samples=20 00:38:29.654 lat (usec) : 750=31.84%, 1000=0.84% 00:38:29.654 lat (msec) : 50=67.32% 00:38:29.654 cpu : usr=94.45%, sys=5.27%, ctx=15, majf=0, minf=98 00:38:29.654 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:29.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.654 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.654 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:29.654 00:38:29.654 Run status group 0 (all jobs): 00:38:29.654 READ: bw=961KiB/s (985kB/s), 390KiB/s-573KiB/s (399kB/s-586kB/s), io=9632KiB (9863kB), run=10005-10018msec 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:29.654 00:38:29.654 real 0m11.337s 00:38:29.654 user 0m20.281s 00:38:29.654 sys 0m1.342s 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:29.654 08:34:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:29.654 ************************************ 00:38:29.654 END TEST fio_dif_1_multi_subsystems 00:38:29.654 ************************************ 00:38:29.654 08:34:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:29.654 08:34:38 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:29.654 08:34:38 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:29.654 08:34:38 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:29.654 08:34:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:29.654 ************************************ 00:38:29.654 START TEST fio_dif_rand_params 00:38:29.654 ************************************ 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 
00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:29.654 bdev_null0 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:29.654 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:29.655 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:38:29.655 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:29.655 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:29.655 08:34:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:29.655 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:29.655 08:34:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:29.655 [2024-07-21 08:34:38.997451] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:29.655 { 00:38:29.655 "params": { 00:38:29.655 "name": "Nvme$subsystem", 00:38:29.655 "trtype": "$TEST_TRANSPORT", 00:38:29.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:29.655 "adrfam": "ipv4", 00:38:29.655 "trsvcid": "$NVMF_PORT", 00:38:29.655 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:38:29.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:29.655 "hdgst": ${hdgst:-false}, 00:38:29.655 "ddgst": ${ddgst:-false} 00:38:29.655 }, 00:38:29.655 "method": "bdev_nvme_attach_controller" 00:38:29.655 } 00:38:29.655 EOF 00:38:29.655 )") 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:29.655 
08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:29.655 "params": { 00:38:29.655 "name": "Nvme0", 00:38:29.655 "trtype": "tcp", 00:38:29.655 "traddr": "10.0.0.2", 00:38:29.655 "adrfam": "ipv4", 00:38:29.655 "trsvcid": "4420", 00:38:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:29.655 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:29.655 "hdgst": false, 00:38:29.655 "ddgst": false 00:38:29.655 }, 00:38:29.655 "method": "bdev_nvme_attach_controller" 00:38:29.655 }' 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:29.655 08:34:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:29.655 08:34:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:29.655 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:29.655 ... 00:38:29.655 fio-3.35 00:38:29.655 Starting 3 threads 00:38:29.913 EAL: No free 2048 kB hugepages reported on node 1 00:38:35.179 00:38:35.179 filename0: (groupid=0, jobs=1): err= 0: pid=96888: Sun Jul 21 08:34:44 2024 00:38:35.179 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(146MiB/5007msec) 00:38:35.179 slat (nsec): min=4766, max=26175, avg=13235.06, stdev=1659.44 00:38:35.179 clat (usec): min=4448, max=55324, avg=12870.40, stdev=8658.68 00:38:35.179 lat (usec): min=4460, max=55338, avg=12883.64, stdev=8658.58 00:38:35.179 clat percentiles (usec): 00:38:35.179 | 1.00th=[ 4948], 5.00th=[ 6587], 10.00th=[ 7635], 20.00th=[ 8979], 00:38:35.179 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:38:35.180 | 70.00th=[12518], 80.00th=[13698], 90.00th=[15139], 95.00th=[17695], 00:38:35.180 | 99.00th=[52691], 99.50th=[53740], 99.90th=[54264], 99.95th=[55313], 00:38:35.180 | 99.99th=[55313] 00:38:35.180 bw ( KiB/s): min=23040, max=37120, per=34.27%, avg=29772.80, stdev=3756.70, samples=10 00:38:35.180 iops : min= 180, max= 290, avg=232.60, stdev=29.35, samples=10 00:38:35.180 lat (msec) : 10=28.84%, 20=66.52%, 50=1.89%, 100=2.75% 00:38:35.180 cpu : usr=92.31%, sys=7.21%, ctx=15, majf=0, minf=44 00:38:35.180 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:35.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.180 issued rwts: total=1165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.180 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:35.180 filename0: (groupid=0, jobs=1): err= 0: pid=96889: Sun Jul 21 
08:34:44 2024 00:38:35.180 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(146MiB/5046msec) 00:38:35.180 slat (nsec): min=5008, max=66384, avg=15219.27, stdev=3336.87 00:38:35.180 clat (usec): min=5039, max=56038, avg=12915.07, stdev=8137.40 00:38:35.180 lat (usec): min=5052, max=56056, avg=12930.29, stdev=8137.41 00:38:35.180 clat percentiles (usec): 00:38:35.180 | 1.00th=[ 5342], 5.00th=[ 6980], 10.00th=[ 8356], 20.00th=[ 9372], 00:38:35.180 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11600], 60.00th=[12256], 00:38:35.180 | 70.00th=[12780], 80.00th=[13698], 90.00th=[15139], 95.00th=[16581], 00:38:35.180 | 99.00th=[53216], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:38:35.180 | 99.99th=[55837] 00:38:35.180 bw ( KiB/s): min=22016, max=35840, per=34.33%, avg=29824.00, stdev=4533.52, samples=10 00:38:35.180 iops : min= 172, max= 280, avg=233.00, stdev=35.42, samples=10 00:38:35.180 lat (msec) : 10=26.48%, 20=69.24%, 50=2.40%, 100=1.89% 00:38:35.180 cpu : usr=92.07%, sys=7.27%, ctx=28, majf=0, minf=102 00:38:35.180 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:35.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.180 issued rwts: total=1167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.180 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:35.180 filename0: (groupid=0, jobs=1): err= 0: pid=96890: Sun Jul 21 08:34:44 2024 00:38:35.180 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(137MiB/5006msec) 00:38:35.180 slat (nsec): min=4968, max=30960, avg=13255.16, stdev=1697.60 00:38:35.180 clat (usec): min=4915, max=57120, avg=13718.07, stdev=8667.80 00:38:35.180 lat (usec): min=4927, max=57133, avg=13731.32, stdev=8667.67 00:38:35.180 clat percentiles (usec): 00:38:35.180 | 1.00th=[ 5211], 5.00th=[ 6718], 10.00th=[ 8291], 20.00th=[ 9765], 00:38:35.180 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12256], 
60.00th=[13042], 00:38:35.180 | 70.00th=[13960], 80.00th=[14615], 90.00th=[15795], 95.00th=[17171], 00:38:35.180 | 99.00th=[53216], 99.50th=[53740], 99.90th=[56886], 99.95th=[56886], 00:38:35.180 | 99.99th=[56886] 00:38:35.180 bw ( KiB/s): min=24320, max=30208, per=32.15%, avg=27935.30, stdev=1972.36, samples=10 00:38:35.180 iops : min= 190, max= 236, avg=218.20, stdev=15.39, samples=10 00:38:35.180 lat (msec) : 10=21.77%, 20=73.56%, 50=2.10%, 100=2.56% 00:38:35.180 cpu : usr=93.99%, sys=5.57%, ctx=13, majf=0, minf=120 00:38:35.180 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:35.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.180 issued rwts: total=1093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.180 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:35.180 00:38:35.180 Run status group 0 (all jobs): 00:38:35.180 READ: bw=84.8MiB/s (89.0MB/s), 27.3MiB/s-29.1MiB/s (28.6MB/s-30.5MB/s), io=428MiB (449MB), run=5006-5046msec 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.437 bdev_null0 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.437 [2024-07-21 08:34:45.018688] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:38:35.437 bdev_null1 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:35.437 
08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.437 bdev_null2 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.437 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
2 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:35.697 { 00:38:35.697 "params": { 00:38:35.697 "name": "Nvme$subsystem", 00:38:35.697 "trtype": "$TEST_TRANSPORT", 00:38:35.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.697 "adrfam": "ipv4", 00:38:35.697 "trsvcid": "$NVMF_PORT", 00:38:35.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.697 "hdgst": ${hdgst:-false}, 00:38:35.697 "ddgst": ${ddgst:-false} 00:38:35.697 }, 00:38:35.697 "method": "bdev_nvme_attach_controller" 00:38:35.697 } 00:38:35.697 EOF 00:38:35.697 )") 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:35.697 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:35.698 { 00:38:35.698 "params": { 00:38:35.698 "name": "Nvme$subsystem", 00:38:35.698 "trtype": "$TEST_TRANSPORT", 00:38:35.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.698 "adrfam": "ipv4", 00:38:35.698 "trsvcid": "$NVMF_PORT", 00:38:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.698 "hdgst": ${hdgst:-false}, 00:38:35.698 "ddgst": ${ddgst:-false} 00:38:35.698 }, 00:38:35.698 "method": "bdev_nvme_attach_controller" 00:38:35.698 } 00:38:35.698 EOF 00:38:35.698 )") 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:35.698 { 00:38:35.698 "params": { 00:38:35.698 "name": "Nvme$subsystem", 00:38:35.698 "trtype": "$TEST_TRANSPORT", 00:38:35.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.698 "adrfam": "ipv4", 00:38:35.698 "trsvcid": "$NVMF_PORT", 00:38:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.698 "hdgst": ${hdgst:-false}, 00:38:35.698 "ddgst": ${ddgst:-false} 00:38:35.698 }, 00:38:35.698 "method": "bdev_nvme_attach_controller" 00:38:35.698 } 00:38:35.698 EOF 00:38:35.698 )") 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:35.698 "params": { 00:38:35.698 "name": "Nvme0", 00:38:35.698 "trtype": "tcp", 00:38:35.698 "traddr": "10.0.0.2", 00:38:35.698 "adrfam": "ipv4", 00:38:35.698 "trsvcid": "4420", 00:38:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:35.698 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:35.698 "hdgst": false, 00:38:35.698 "ddgst": false 00:38:35.698 }, 00:38:35.698 "method": "bdev_nvme_attach_controller" 00:38:35.698 },{ 00:38:35.698 "params": { 00:38:35.698 "name": "Nvme1", 00:38:35.698 "trtype": "tcp", 00:38:35.698 "traddr": "10.0.0.2", 00:38:35.698 "adrfam": "ipv4", 00:38:35.698 "trsvcid": "4420", 00:38:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:35.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:35.698 "hdgst": false, 00:38:35.698 "ddgst": false 00:38:35.698 }, 00:38:35.698 "method": "bdev_nvme_attach_controller" 00:38:35.698 },{ 00:38:35.698 "params": { 00:38:35.698 "name": "Nvme2", 00:38:35.698 "trtype": "tcp", 00:38:35.698 "traddr": "10.0.0.2", 00:38:35.698 "adrfam": "ipv4", 00:38:35.698 "trsvcid": "4420", 00:38:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:35.698 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:35.698 "hdgst": false, 00:38:35.698 "ddgst": false 00:38:35.698 }, 00:38:35.698 "method": "bdev_nvme_attach_controller" 00:38:35.698 }' 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:35.698 08:34:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:35.698 08:34:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.954 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:35.954 ... 00:38:35.954 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:35.954 ... 00:38:35.954 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:35.954 ... 
00:38:35.954 fio-3.35 00:38:35.954 Starting 24 threads 00:38:35.954 EAL: No free 2048 kB hugepages reported on node 1 00:38:48.145 00:38:48.145 filename0: (groupid=0, jobs=1): err= 0: pid=97744: Sun Jul 21 08:34:56 2024 00:38:48.145 read: IOPS=71, BW=285KiB/s (292kB/s)(2880KiB/10092msec) 00:38:48.145 slat (nsec): min=8445, max=87108, avg=27974.55, stdev=11454.07 00:38:48.145 clat (msec): min=123, max=334, avg=224.04, stdev=37.22 00:38:48.145 lat (msec): min=123, max=334, avg=224.06, stdev=37.22 00:38:48.145 clat percentiles (msec): 00:38:48.145 | 1.00th=[ 124], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 182], 00:38:48.145 | 30.00th=[ 201], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 243], 00:38:48.145 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 275], 00:38:48.145 | 99.00th=[ 309], 99.50th=[ 330], 99.90th=[ 334], 99.95th=[ 334], 00:38:48.145 | 99.99th=[ 334] 00:38:48.145 bw ( KiB/s): min= 240, max= 384, per=4.20%, avg=281.60, stdev=52.79, samples=20 00:38:48.145 iops : min= 60, max= 96, avg=70.40, stdev=13.20, samples=20 00:38:48.145 lat (msec) : 250=83.61%, 500=16.39% 00:38:48.145 cpu : usr=98.21%, sys=1.36%, ctx=13, majf=0, minf=14 00:38:48.145 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:38:48.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.145 filename0: (groupid=0, jobs=1): err= 0: pid=97745: Sun Jul 21 08:34:56 2024 00:38:48.145 read: IOPS=63, BW=256KiB/s (262kB/s)(2560KiB/10006msec) 00:38:48.145 slat (nsec): min=8697, max=86787, avg=22749.02, stdev=17137.05 00:38:48.145 clat (msec): min=177, max=355, avg=249.94, stdev=21.76 00:38:48.145 lat (msec): min=177, max=355, avg=249.96, stdev=21.76 00:38:48.145 clat percentiles (msec): 00:38:48.145 | 1.00th=[ 
188], 5.00th=[ 215], 10.00th=[ 232], 20.00th=[ 241], 00:38:48.145 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.145 | 70.00th=[ 255], 80.00th=[ 259], 90.00th=[ 284], 95.00th=[ 296], 00:38:48.145 | 99.00th=[ 309], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 355], 00:38:48.145 | 99.99th=[ 355] 00:38:48.145 bw ( KiB/s): min= 128, max= 384, per=3.73%, avg=249.60, stdev=65.95, samples=20 00:38:48.145 iops : min= 32, max= 96, avg=62.40, stdev=16.49, samples=20 00:38:48.145 lat (msec) : 250=65.00%, 500=35.00% 00:38:48.145 cpu : usr=98.25%, sys=1.33%, ctx=18, majf=0, minf=21 00:38:48.145 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:38:48.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.145 filename0: (groupid=0, jobs=1): err= 0: pid=97746: Sun Jul 21 08:34:56 2024 00:38:48.145 read: IOPS=63, BW=256KiB/s (262kB/s)(2560KiB/10010msec) 00:38:48.145 slat (usec): min=13, max=114, avg=68.16, stdev=16.54 00:38:48.145 clat (msec): min=124, max=384, avg=249.67, stdev=31.05 00:38:48.145 lat (msec): min=124, max=384, avg=249.73, stdev=31.06 00:38:48.145 clat percentiles (msec): 00:38:48.145 | 1.00th=[ 171], 5.00th=[ 186], 10.00th=[ 215], 20.00th=[ 241], 00:38:48.145 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.145 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 296], 95.00th=[ 296], 00:38:48.145 | 99.00th=[ 342], 99.50th=[ 372], 99.90th=[ 384], 99.95th=[ 384], 00:38:48.145 | 99.99th=[ 384] 00:38:48.145 bw ( KiB/s): min= 128, max= 384, per=3.73%, avg=249.60, stdev=77.42, samples=20 00:38:48.145 iops : min= 32, max= 96, avg=62.40, stdev=19.35, samples=20 00:38:48.145 lat (msec) : 250=63.59%, 500=36.41% 00:38:48.145 cpu : usr=98.01%, 
sys=1.39%, ctx=14, majf=0, minf=19 00:38:48.145 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:38:48.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.145 filename0: (groupid=0, jobs=1): err= 0: pid=97747: Sun Jul 21 08:34:56 2024 00:38:48.145 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10099msec) 00:38:48.145 slat (usec): min=7, max=115, avg=66.53, stdev=17.90 00:38:48.145 clat (msec): min=88, max=384, avg=239.90, stdev=38.23 00:38:48.145 lat (msec): min=88, max=385, avg=239.96, stdev=38.24 00:38:48.145 clat percentiles (msec): 00:38:48.145 | 1.00th=[ 89], 5.00th=[ 176], 10.00th=[ 192], 20.00th=[ 226], 00:38:48.145 | 30.00th=[ 241], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 249], 00:38:48.145 | 70.00th=[ 251], 80.00th=[ 259], 90.00th=[ 275], 95.00th=[ 284], 00:38:48.145 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 384], 99.95th=[ 384], 00:38:48.145 | 99.99th=[ 384] 00:38:48.145 bw ( KiB/s): min= 240, max= 368, per=3.92%, avg=262.40, stdev=26.16, samples=20 00:38:48.145 iops : min= 60, max= 92, avg=65.60, stdev= 6.54, samples=20 00:38:48.145 lat (msec) : 100=2.38%, 250=64.73%, 500=32.89% 00:38:48.145 cpu : usr=97.18%, sys=1.89%, ctx=39, majf=0, minf=14 00:38:48.145 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:38:48.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.145 filename0: (groupid=0, jobs=1): err= 0: pid=97748: Sun Jul 21 08:34:56 2024 00:38:48.145 read: IOPS=68, BW=273KiB/s 
(279kB/s)(2752KiB/10084msec) 00:38:48.145 slat (usec): min=8, max=102, avg=33.46, stdev=20.65 00:38:48.145 clat (msec): min=145, max=374, avg=234.22, stdev=33.94 00:38:48.145 lat (msec): min=145, max=374, avg=234.25, stdev=33.95 00:38:48.145 clat percentiles (msec): 00:38:48.145 | 1.00th=[ 157], 5.00th=[ 171], 10.00th=[ 180], 20.00th=[ 205], 00:38:48.145 | 30.00th=[ 226], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:38:48.145 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 275], 00:38:48.145 | 99.00th=[ 342], 99.50th=[ 355], 99.90th=[ 376], 99.95th=[ 376], 00:38:48.145 | 99.99th=[ 376] 00:38:48.145 bw ( KiB/s): min= 256, max= 368, per=4.01%, avg=268.80, stdev=34.28, samples=20 00:38:48.145 iops : min= 64, max= 92, avg=67.20, stdev= 8.57, samples=20 00:38:48.145 lat (msec) : 250=75.15%, 500=24.85% 00:38:48.145 cpu : usr=97.52%, sys=1.72%, ctx=171, majf=0, minf=27 00:38:48.145 IO depths : 1=3.1%, 2=9.2%, 4=24.6%, 8=53.8%, 16=9.4%, 32=0.0%, >=64=0.0% 00:38:48.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.145 filename0: (groupid=0, jobs=1): err= 0: pid=97749: Sun Jul 21 08:34:56 2024 00:38:48.145 read: IOPS=95, BW=381KiB/s (390kB/s)(3848KiB/10091msec) 00:38:48.145 slat (nsec): min=7495, max=87140, avg=10968.96, stdev=5056.61 00:38:48.145 clat (msec): min=86, max=292, avg=167.33, stdev=32.22 00:38:48.145 lat (msec): min=86, max=292, avg=167.34, stdev=32.22 00:38:48.145 clat percentiles (msec): 00:38:48.145 | 1.00th=[ 88], 5.00th=[ 111], 10.00th=[ 136], 20.00th=[ 148], 00:38:48.145 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 176], 00:38:48.145 | 70.00th=[ 180], 80.00th=[ 182], 90.00th=[ 205], 95.00th=[ 230], 00:38:48.145 | 99.00th=[ 268], 99.50th=[ 268], 99.90th=[ 292], 
99.95th=[ 292], 00:38:48.145 | 99.99th=[ 292] 00:38:48.145 bw ( KiB/s): min= 304, max= 496, per=5.66%, avg=378.40, stdev=48.22, samples=20 00:38:48.145 iops : min= 76, max= 124, avg=94.60, stdev=12.05, samples=20 00:38:48.145 lat (msec) : 100=1.66%, 250=94.59%, 500=3.74% 00:38:48.145 cpu : usr=98.42%, sys=1.20%, ctx=20, majf=0, minf=24 00:38:48.145 IO depths : 1=0.4%, 2=1.0%, 4=7.7%, 8=78.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:38:48.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 complete : 0=0.0%, 4=89.1%, 8=5.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 issued rwts: total=962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.145 filename0: (groupid=0, jobs=1): err= 0: pid=97750: Sun Jul 21 08:34:56 2024 00:38:48.145 read: IOPS=69, BW=279KiB/s (286kB/s)(2816KiB/10092msec) 00:38:48.145 slat (nsec): min=8442, max=55812, avg=25870.66, stdev=7239.64 00:38:48.145 clat (msec): min=146, max=279, avg=229.12, stdev=31.71 00:38:48.145 lat (msec): min=147, max=279, avg=229.15, stdev=31.71 00:38:48.145 clat percentiles (msec): 00:38:48.145 | 1.00th=[ 148], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 190], 00:38:48.145 | 30.00th=[ 224], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 243], 00:38:48.145 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 259], 95.00th=[ 271], 00:38:48.145 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:38:48.145 | 99.99th=[ 279] 00:38:48.145 bw ( KiB/s): min= 256, max= 384, per=4.11%, avg=275.20, stdev=46.89, samples=20 00:38:48.145 iops : min= 64, max= 96, avg=68.80, stdev=11.72, samples=20 00:38:48.145 lat (msec) : 250=81.25%, 500=18.75% 00:38:48.145 cpu : usr=98.20%, sys=1.41%, ctx=21, majf=0, minf=22 00:38:48.145 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:38:48.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.145 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.145 filename0: (groupid=0, jobs=1): err= 0: pid=97751: Sun Jul 21 08:34:56 2024 00:38:48.145 read: IOPS=63, BW=256KiB/s (262kB/s)(2560KiB/10012msec) 00:38:48.145 slat (usec): min=26, max=107, avg=72.45, stdev=11.65 00:38:48.146 clat (msec): min=167, max=310, avg=249.65, stdev=20.35 00:38:48.146 lat (msec): min=167, max=310, avg=249.73, stdev=20.35 00:38:48.146 clat percentiles (msec): 00:38:48.146 | 1.00th=[ 213], 5.00th=[ 215], 10.00th=[ 232], 20.00th=[ 241], 00:38:48.146 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.146 | 70.00th=[ 255], 80.00th=[ 259], 90.00th=[ 284], 95.00th=[ 296], 00:38:48.146 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:38:48.146 | 99.99th=[ 309] 00:38:48.146 bw ( KiB/s): min= 128, max= 384, per=3.73%, avg=249.60, stdev=77.59, samples=20 00:38:48.146 iops : min= 32, max= 96, avg=62.40, stdev=19.40, samples=20 00:38:48.146 lat (msec) : 250=64.84%, 500=35.16% 00:38:48.146 cpu : usr=98.04%, sys=1.53%, ctx=10, majf=0, minf=19 00:38:48.146 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:38:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.146 filename1: (groupid=0, jobs=1): err= 0: pid=97752: Sun Jul 21 08:34:56 2024 00:38:48.146 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10100msec) 00:38:48.146 slat (usec): min=7, max=102, avg=70.21, stdev=14.04 00:38:48.146 clat (msec): min=129, max=400, avg=245.74, stdev=35.60 00:38:48.146 lat (msec): min=129, max=400, avg=245.81, stdev=35.61 00:38:48.146 clat percentiles 
(msec): 00:38:48.146 | 1.00th=[ 146], 5.00th=[ 176], 10.00th=[ 207], 20.00th=[ 236], 00:38:48.146 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.146 | 70.00th=[ 255], 80.00th=[ 268], 90.00th=[ 279], 95.00th=[ 305], 00:38:48.146 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 401], 99.95th=[ 401], 00:38:48.146 | 99.99th=[ 401] 00:38:48.146 bw ( KiB/s): min= 256, max= 256, per=3.83%, avg=256.00, stdev= 0.00, samples=20 00:38:48.146 iops : min= 64, max= 64, avg=64.00, stdev= 0.00, samples=20 00:38:48.146 lat (msec) : 250=64.79%, 500=35.21% 00:38:48.146 cpu : usr=97.81%, sys=1.55%, ctx=48, majf=0, minf=22 00:38:48.146 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:38:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.146 filename1: (groupid=0, jobs=1): err= 0: pid=97753: Sun Jul 21 08:34:56 2024 00:38:48.146 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10099msec) 00:38:48.146 slat (usec): min=13, max=123, avg=66.56, stdev=16.30 00:38:48.146 clat (msec): min=125, max=380, avg=245.61, stdev=36.80 00:38:48.146 lat (msec): min=125, max=380, avg=245.67, stdev=36.80 00:38:48.146 clat percentiles (msec): 00:38:48.146 | 1.00th=[ 129], 5.00th=[ 184], 10.00th=[ 201], 20.00th=[ 239], 00:38:48.146 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.146 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 284], 95.00th=[ 300], 00:38:48.146 | 99.00th=[ 359], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:38:48.146 | 99.99th=[ 380] 00:38:48.146 bw ( KiB/s): min= 256, max= 256, per=3.83%, avg=256.00, stdev= 0.00, samples=20 00:38:48.146 iops : min= 64, max= 64, avg=64.00, stdev= 0.00, samples=20 00:38:48.146 lat (msec) : 250=64.63%, 500=35.37% 
00:38:48.146 cpu : usr=97.58%, sys=1.68%, ctx=51, majf=0, minf=25 00:38:48.146 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:38:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.146 filename1: (groupid=0, jobs=1): err= 0: pid=97754: Sun Jul 21 08:34:56 2024 00:38:48.146 read: IOPS=65, BW=260KiB/s (266kB/s)(2624KiB/10086msec) 00:38:48.146 slat (nsec): min=5892, max=91431, avg=26234.26, stdev=17564.98 00:38:48.146 clat (msec): min=128, max=363, avg=245.77, stdev=30.70 00:38:48.146 lat (msec): min=128, max=363, avg=245.80, stdev=30.70 00:38:48.146 clat percentiles (msec): 00:38:48.146 | 1.00th=[ 155], 5.00th=[ 176], 10.00th=[ 215], 20.00th=[ 239], 00:38:48.146 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.146 | 70.00th=[ 255], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 288], 00:38:48.146 | 99.00th=[ 317], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:38:48.146 | 99.99th=[ 363] 00:38:48.146 bw ( KiB/s): min= 128, max= 368, per=3.83%, avg=256.00, stdev=39.87, samples=20 00:38:48.146 iops : min= 32, max= 92, avg=64.00, stdev= 9.97, samples=20 00:38:48.146 lat (msec) : 250=62.50%, 500=37.50% 00:38:48.146 cpu : usr=98.08%, sys=1.43%, ctx=47, majf=0, minf=17 00:38:48.146 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:38:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.146 filename1: (groupid=0, jobs=1): err= 0: pid=97755: Sun Jul 21 08:34:56 2024 
00:38:48.146 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10091msec) 00:38:48.146 slat (usec): min=6, max=132, avg=66.25, stdev=20.06 00:38:48.146 clat (msec): min=85, max=379, avg=239.73, stdev=41.51 00:38:48.146 lat (msec): min=85, max=379, avg=239.80, stdev=41.52 00:38:48.146 clat percentiles (msec): 00:38:48.146 | 1.00th=[ 86], 5.00th=[ 126], 10.00th=[ 215], 20.00th=[ 239], 00:38:48.146 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.146 | 70.00th=[ 251], 80.00th=[ 259], 90.00th=[ 275], 95.00th=[ 284], 00:38:48.146 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 380], 99.95th=[ 380], 00:38:48.146 | 99.99th=[ 380] 00:38:48.146 bw ( KiB/s): min= 256, max= 384, per=3.92%, avg=262.40, stdev=28.62, samples=20 00:38:48.146 iops : min= 64, max= 96, avg=65.60, stdev= 7.16, samples=20 00:38:48.146 lat (msec) : 100=2.38%, 250=65.33%, 500=32.29% 00:38:48.146 cpu : usr=97.94%, sys=1.46%, ctx=30, majf=0, minf=33 00:38:48.146 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:38:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.146 filename1: (groupid=0, jobs=1): err= 0: pid=97756: Sun Jul 21 08:34:56 2024 00:38:48.146 read: IOPS=72, BW=290KiB/s (297kB/s)(2928KiB/10099msec) 00:38:48.146 slat (usec): min=12, max=127, avg=24.90, stdev=14.90 00:38:48.146 clat (msec): min=88, max=331, avg=220.07, stdev=42.25 00:38:48.146 lat (msec): min=88, max=331, avg=220.10, stdev=42.25 00:38:48.146 clat percentiles (msec): 00:38:48.146 | 1.00th=[ 89], 5.00th=[ 142], 10.00th=[ 176], 20.00th=[ 180], 00:38:48.146 | 30.00th=[ 190], 40.00th=[ 224], 50.00th=[ 241], 60.00th=[ 243], 00:38:48.146 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 275], 00:38:48.146 | 99.00th=[ 
309], 99.50th=[ 326], 99.90th=[ 330], 99.95th=[ 330], 00:38:48.146 | 99.99th=[ 330] 00:38:48.146 bw ( KiB/s): min= 240, max= 384, per=4.28%, avg=286.40, stdev=52.91, samples=20 00:38:48.146 iops : min= 60, max= 96, avg=71.60, stdev=13.23, samples=20 00:38:48.146 lat (msec) : 100=2.19%, 250=80.05%, 500=17.76% 00:38:48.146 cpu : usr=96.21%, sys=2.53%, ctx=121, majf=0, minf=20 00:38:48.146 IO depths : 1=3.7%, 2=9.0%, 4=22.1%, 8=56.3%, 16=8.9%, 32=0.0%, >=64=0.0% 00:38:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 issued rwts: total=732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.146 filename1: (groupid=0, jobs=1): err= 0: pid=97757: Sun Jul 21 08:34:56 2024 00:38:48.146 read: IOPS=63, BW=256KiB/s (262kB/s)(2560KiB/10007msec) 00:38:48.146 slat (usec): min=26, max=108, avg=74.08, stdev=11.56 00:38:48.146 clat (msec): min=177, max=354, avg=249.60, stdev=21.23 00:38:48.146 lat (msec): min=177, max=354, avg=249.67, stdev=21.23 00:38:48.146 clat percentiles (msec): 00:38:48.146 | 1.00th=[ 184], 5.00th=[ 215], 10.00th=[ 232], 20.00th=[ 241], 00:38:48.146 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.146 | 70.00th=[ 253], 80.00th=[ 259], 90.00th=[ 284], 95.00th=[ 296], 00:38:48.146 | 99.00th=[ 305], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 355], 00:38:48.146 | 99.99th=[ 355] 00:38:48.146 bw ( KiB/s): min= 128, max= 384, per=3.73%, avg=249.60, stdev=61.07, samples=20 00:38:48.146 iops : min= 32, max= 96, avg=62.40, stdev=15.27, samples=20 00:38:48.146 lat (msec) : 250=66.09%, 500=33.91% 00:38:48.146 cpu : usr=98.12%, sys=1.41%, ctx=17, majf=0, minf=23 00:38:48.146 IO depths : 1=1.9%, 2=8.1%, 4=25.0%, 8=54.4%, 16=10.6%, 32=0.0%, >=64=0.0% 00:38:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 complete : 
0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.146 filename1: (groupid=0, jobs=1): err= 0: pid=97758: Sun Jul 21 08:34:56 2024 00:38:48.146 read: IOPS=65, BW=260KiB/s (267kB/s)(2624KiB/10081msec) 00:38:48.146 slat (nsec): min=8173, max=97735, avg=29991.40, stdev=21589.95 00:38:48.146 clat (msec): min=124, max=365, avg=245.61, stdev=36.86 00:38:48.146 lat (msec): min=124, max=365, avg=245.64, stdev=36.86 00:38:48.146 clat percentiles (msec): 00:38:48.146 | 1.00th=[ 128], 5.00th=[ 174], 10.00th=[ 205], 20.00th=[ 239], 00:38:48.146 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.146 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 284], 95.00th=[ 305], 00:38:48.146 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 368], 99.95th=[ 368], 00:38:48.146 | 99.99th=[ 368] 00:38:48.146 bw ( KiB/s): min= 128, max= 368, per=3.83%, avg=256.00, stdev=39.19, samples=20 00:38:48.146 iops : min= 32, max= 92, avg=64.00, stdev= 9.80, samples=20 00:38:48.146 lat (msec) : 250=64.02%, 500=35.98% 00:38:48.146 cpu : usr=98.12%, sys=1.46%, ctx=14, majf=0, minf=18 00:38:48.146 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:38:48.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.146 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.146 filename1: (groupid=0, jobs=1): err= 0: pid=97759: Sun Jul 21 08:34:56 2024 00:38:48.146 read: IOPS=68, BW=273KiB/s (279kB/s)(2752KiB/10092msec) 00:38:48.146 slat (nsec): min=8457, max=96888, avg=25930.30, stdev=11326.85 00:38:48.146 clat (msec): min=148, max=330, avg=233.07, stdev=32.01 00:38:48.146 lat (msec): min=148, max=330, avg=233.10, 
stdev=32.01 00:38:48.146 clat percentiles (msec): 00:38:48.146 | 1.00th=[ 148], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 194], 00:38:48.147 | 30.00th=[ 236], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:38:48.147 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 271], 95.00th=[ 275], 00:38:48.147 | 99.00th=[ 300], 99.50th=[ 317], 99.90th=[ 330], 99.95th=[ 330], 00:38:48.147 | 99.99th=[ 330] 00:38:48.147 bw ( KiB/s): min= 240, max= 384, per=4.01%, avg=268.80, stdev=38.00, samples=20 00:38:48.147 iops : min= 60, max= 96, avg=67.20, stdev= 9.50, samples=20 00:38:48.147 lat (msec) : 250=77.33%, 500=22.67% 00:38:48.147 cpu : usr=97.70%, sys=1.64%, ctx=46, majf=0, minf=14 00:38:48.147 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:38:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.147 filename2: (groupid=0, jobs=1): err= 0: pid=97760: Sun Jul 21 08:34:56 2024 00:38:48.147 read: IOPS=64, BW=259KiB/s (265kB/s)(2616KiB/10090msec) 00:38:48.147 slat (usec): min=8, max=104, avg=68.88, stdev=13.58 00:38:48.147 clat (msec): min=126, max=384, avg=246.04, stdev=34.34 00:38:48.147 lat (msec): min=126, max=384, avg=246.11, stdev=34.35 00:38:48.147 clat percentiles (msec): 00:38:48.147 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 209], 20.00th=[ 236], 00:38:48.147 | 30.00th=[ 241], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.147 | 70.00th=[ 253], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 309], 00:38:48.147 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 384], 00:38:48.147 | 99.99th=[ 384] 00:38:48.147 bw ( KiB/s): min= 240, max= 272, per=3.82%, avg=255.20, stdev= 8.17, samples=20 00:38:48.147 iops : min= 60, max= 68, avg=63.80, stdev= 2.04, samples=20 00:38:48.147 
lat (msec) : 250=64.83%, 500=35.17% 00:38:48.147 cpu : usr=97.47%, sys=1.73%, ctx=45, majf=0, minf=13 00:38:48.147 IO depths : 1=3.4%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:38:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.147 filename2: (groupid=0, jobs=1): err= 0: pid=97761: Sun Jul 21 08:34:56 2024 00:38:48.147 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10075msec) 00:38:48.147 slat (usec): min=13, max=114, avg=72.25, stdev=13.11 00:38:48.147 clat (msec): min=139, max=353, avg=249.78, stdev=30.28 00:38:48.147 lat (msec): min=139, max=353, avg=249.85, stdev=30.28 00:38:48.147 clat percentiles (msec): 00:38:48.147 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 215], 20.00th=[ 239], 00:38:48.147 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.147 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 296], 95.00th=[ 305], 00:38:48.147 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 355], 99.95th=[ 355], 00:38:48.147 | 99.99th=[ 355] 00:38:48.147 bw ( KiB/s): min= 128, max= 368, per=3.73%, avg=249.60, stdev=59.05, samples=20 00:38:48.147 iops : min= 32, max= 92, avg=62.40, stdev=14.76, samples=20 00:38:48.147 lat (msec) : 250=64.06%, 500=35.94% 00:38:48.147 cpu : usr=97.78%, sys=1.58%, ctx=19, majf=0, minf=25 00:38:48.147 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:38:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.147 filename2: (groupid=0, jobs=1): err= 0: pid=97762: Sun Jul 
21 08:34:56 2024 00:38:48.147 read: IOPS=64, BW=259KiB/s (265kB/s)(2616KiB/10099msec) 00:38:48.147 slat (usec): min=13, max=114, avg=71.14, stdev=13.94 00:38:48.147 clat (msec): min=129, max=362, avg=246.25, stdev=28.27 00:38:48.147 lat (msec): min=129, max=362, avg=246.33, stdev=28.27 00:38:48.147 clat percentiles (msec): 00:38:48.147 | 1.00th=[ 165], 5.00th=[ 207], 10.00th=[ 222], 20.00th=[ 239], 00:38:48.147 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:38:48.147 | 70.00th=[ 253], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:38:48.147 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:38:48.147 | 99.99th=[ 363] 00:38:48.147 bw ( KiB/s): min= 240, max= 272, per=3.82%, avg=255.20, stdev= 6.30, samples=20 00:38:48.147 iops : min= 60, max= 68, avg=63.80, stdev= 1.58, samples=20 00:38:48.147 lat (msec) : 250=65.60%, 500=34.40% 00:38:48.147 cpu : usr=97.97%, sys=1.47%, ctx=47, majf=0, minf=19 00:38:48.147 IO depths : 1=3.4%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:38:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.147 filename2: (groupid=0, jobs=1): err= 0: pid=97763: Sun Jul 21 08:34:56 2024 00:38:48.147 read: IOPS=71, BW=285KiB/s (292kB/s)(2880KiB/10099msec) 00:38:48.147 slat (nsec): min=8659, max=90923, avg=29283.81, stdev=11796.75 00:38:48.147 clat (msec): min=110, max=334, avg=224.12, stdev=37.09 00:38:48.147 lat (msec): min=110, max=334, avg=224.15, stdev=37.09 00:38:48.147 clat percentiles (msec): 00:38:48.147 | 1.00th=[ 124], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 182], 00:38:48.147 | 30.00th=[ 201], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 243], 00:38:48.147 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 259], 95.00th=[ 275], 
00:38:48.147 | 99.00th=[ 279], 99.50th=[ 326], 99.90th=[ 334], 99.95th=[ 334], 00:38:48.147 | 99.99th=[ 334] 00:38:48.147 bw ( KiB/s): min= 256, max= 384, per=4.20%, avg=281.60, stdev=52.53, samples=20 00:38:48.147 iops : min= 64, max= 96, avg=70.40, stdev=13.13, samples=20 00:38:48.147 lat (msec) : 250=83.33%, 500=16.67% 00:38:48.147 cpu : usr=98.00%, sys=1.44%, ctx=52, majf=0, minf=15 00:38:48.147 IO depths : 1=3.2%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:38:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.147 filename2: (groupid=0, jobs=1): err= 0: pid=97764: Sun Jul 21 08:34:56 2024 00:38:48.147 read: IOPS=90, BW=360KiB/s (369kB/s)(3640KiB/10099msec) 00:38:48.147 slat (usec): min=7, max=111, avg=14.99, stdev=14.51 00:38:48.147 clat (msec): min=88, max=289, avg=177.22, stdev=35.43 00:38:48.147 lat (msec): min=88, max=289, avg=177.23, stdev=35.43 00:38:48.147 clat percentiles (msec): 00:38:48.147 | 1.00th=[ 89], 5.00th=[ 131], 10.00th=[ 146], 20.00th=[ 153], 00:38:48.147 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 171], 60.00th=[ 180], 00:38:48.147 | 70.00th=[ 182], 80.00th=[ 205], 90.00th=[ 239], 95.00th=[ 245], 00:38:48.147 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 292], 99.95th=[ 292], 00:38:48.147 | 99.99th=[ 292] 00:38:48.147 bw ( KiB/s): min= 256, max= 384, per=5.34%, avg=357.60, stdev=31.69, samples=20 00:38:48.147 iops : min= 64, max= 96, avg=89.40, stdev= 7.92, samples=20 00:38:48.147 lat (msec) : 100=1.76%, 250=94.29%, 500=3.96% 00:38:48.147 cpu : usr=97.97%, sys=1.38%, ctx=46, majf=0, minf=26 00:38:48.147 IO depths : 1=0.4%, 2=1.6%, 4=9.0%, 8=76.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:38:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:38:48.147 complete : 0=0.0%, 4=89.4%, 8=5.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 issued rwts: total=910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.147 filename2: (groupid=0, jobs=1): err= 0: pid=97765: Sun Jul 21 08:34:56 2024 00:38:48.147 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10083msec) 00:38:48.147 slat (nsec): min=8972, max=93088, avg=26863.77, stdev=16355.08 00:38:48.147 clat (msec): min=141, max=384, avg=251.67, stdev=27.58 00:38:48.147 lat (msec): min=141, max=384, avg=251.69, stdev=27.58 00:38:48.147 clat percentiles (msec): 00:38:48.147 | 1.00th=[ 215], 5.00th=[ 220], 10.00th=[ 222], 20.00th=[ 241], 00:38:48.147 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 251], 00:38:48.147 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 279], 95.00th=[ 288], 00:38:48.147 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:38:48.147 | 99.99th=[ 384] 00:38:48.147 bw ( KiB/s): min= 128, max= 384, per=3.73%, avg=249.60, stdev=50.44, samples=20 00:38:48.147 iops : min= 32, max= 96, avg=62.40, stdev=12.61, samples=20 00:38:48.147 lat (msec) : 250=60.00%, 500=40.00% 00:38:48.147 cpu : usr=98.20%, sys=1.32%, ctx=28, majf=0, minf=18 00:38:48.147 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:38:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.147 filename2: (groupid=0, jobs=1): err= 0: pid=97766: Sun Jul 21 08:34:56 2024 00:38:48.147 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10094msec) 00:38:48.147 slat (nsec): min=3970, max=59316, avg=18105.13, stdev=8233.79 00:38:48.147 clat (msec): min=124, max=381, avg=240.16, stdev=43.82 00:38:48.147 lat (msec): min=124, 
max=381, avg=240.18, stdev=43.83 00:38:48.147 clat percentiles (msec): 00:38:48.147 | 1.00th=[ 128], 5.00th=[ 155], 10.00th=[ 176], 20.00th=[ 222], 00:38:48.147 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:38:48.147 | 70.00th=[ 251], 80.00th=[ 257], 90.00th=[ 284], 95.00th=[ 305], 00:38:48.147 | 99.00th=[ 363], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:38:48.147 | 99.99th=[ 380] 00:38:48.147 bw ( KiB/s): min= 240, max= 384, per=3.92%, avg=262.40, stdev=29.09, samples=20 00:38:48.147 iops : min= 60, max= 96, avg=65.60, stdev= 7.27, samples=20 00:38:48.147 lat (msec) : 250=68.30%, 500=31.70% 00:38:48.147 cpu : usr=97.47%, sys=1.77%, ctx=29, majf=0, minf=21 00:38:48.147 IO depths : 1=3.3%, 2=9.4%, 4=24.6%, 8=53.6%, 16=9.2%, 32=0.0%, >=64=0.0% 00:38:48.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.147 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.147 filename2: (groupid=0, jobs=1): err= 0: pid=97767: Sun Jul 21 08:34:56 2024 00:38:48.147 read: IOPS=95, BW=383KiB/s (392kB/s)(3872KiB/10102msec) 00:38:48.147 slat (nsec): min=7327, max=32041, avg=10379.23, stdev=3051.34 00:38:48.147 clat (msec): min=87, max=273, avg=166.47, stdev=30.39 00:38:48.147 lat (msec): min=87, max=273, avg=166.48, stdev=30.39 00:38:48.147 clat percentiles (msec): 00:38:48.147 | 1.00th=[ 88], 5.00th=[ 111], 10.00th=[ 136], 20.00th=[ 148], 00:38:48.147 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 167], 60.00th=[ 174], 00:38:48.147 | 70.00th=[ 180], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 228], 00:38:48.147 | 99.00th=[ 266], 99.50th=[ 268], 99.90th=[ 275], 99.95th=[ 275], 00:38:48.147 | 99.99th=[ 275] 00:38:48.147 bw ( KiB/s): min= 304, max= 512, per=5.69%, avg=380.80, stdev=42.68, samples=20 00:38:48.148 iops : min= 76, max= 128, avg=95.20, 
stdev=10.67, samples=20 00:38:48.148 lat (msec) : 100=1.65%, 250=97.11%, 500=1.24% 00:38:48.148 cpu : usr=98.10%, sys=1.52%, ctx=13, majf=0, minf=19 00:38:48.148 IO depths : 1=1.0%, 2=2.3%, 4=9.5%, 8=75.4%, 16=11.8%, 32=0.0%, >=64=0.0% 00:38:48.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.148 complete : 0=0.0%, 4=89.6%, 8=5.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.148 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:48.148 00:38:48.148 Run status group 0 (all jobs): 00:38:48.148 READ: bw=6684KiB/s (6844kB/s), 254KiB/s-383KiB/s (260kB/s-392kB/s), io=65.9MiB (69.1MB), run=10006-10102msec 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 bdev_null0 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 [2024-07-21 08:34:56.510833] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 bdev_null1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # 
for subsystem in "${@:-1}" 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:48.148 { 00:38:48.148 "params": { 00:38:48.148 "name": "Nvme$subsystem", 00:38:48.148 "trtype": "$TEST_TRANSPORT", 00:38:48.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.148 "adrfam": "ipv4", 00:38:48.148 "trsvcid": "$NVMF_PORT", 00:38:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.148 "hdgst": ${hdgst:-false}, 00:38:48.148 "ddgst": ${ddgst:-false} 00:38:48.148 }, 00:38:48.148 "method": "bdev_nvme_attach_controller" 00:38:48.148 } 00:38:48.148 EOF 00:38:48.148 )") 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:48.148 08:34:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:48.148 { 00:38:48.148 "params": { 00:38:48.148 "name": "Nvme$subsystem", 00:38:48.148 "trtype": "$TEST_TRANSPORT", 00:38:48.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.149 "adrfam": "ipv4", 00:38:48.149 "trsvcid": "$NVMF_PORT", 00:38:48.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.149 "hdgst": ${hdgst:-false}, 00:38:48.149 "ddgst": ${ddgst:-false} 00:38:48.149 }, 00:38:48.149 "method": "bdev_nvme_attach_controller" 00:38:48.149 } 00:38:48.149 EOF 00:38:48.149 )") 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@556 -- # jq . 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:48.149 "params": { 00:38:48.149 "name": "Nvme0", 00:38:48.149 "trtype": "tcp", 00:38:48.149 "traddr": "10.0.0.2", 00:38:48.149 "adrfam": "ipv4", 00:38:48.149 "trsvcid": "4420", 00:38:48.149 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:48.149 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:48.149 "hdgst": false, 00:38:48.149 "ddgst": false 00:38:48.149 }, 00:38:48.149 "method": "bdev_nvme_attach_controller" 00:38:48.149 },{ 00:38:48.149 "params": { 00:38:48.149 "name": "Nvme1", 00:38:48.149 "trtype": "tcp", 00:38:48.149 "traddr": "10.0.0.2", 00:38:48.149 "adrfam": "ipv4", 00:38:48.149 "trsvcid": "4420", 00:38:48.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:48.149 "hdgst": false, 00:38:48.149 "ddgst": false 00:38:48.149 }, 00:38:48.149 "method": "bdev_nvme_attach_controller" 00:38:48.149 }' 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:48.149 
08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:48.149 08:34:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:48.149 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:48.149 ... 00:38:48.149 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:48.149 ... 00:38:48.149 fio-3.35 00:38:48.149 Starting 4 threads 00:38:48.149 EAL: No free 2048 kB hugepages reported on node 1 00:38:53.405 00:38:53.405 filename0: (groupid=0, jobs=1): err= 0: pid=99152: Sun Jul 21 08:35:02 2024 00:38:53.405 read: IOPS=1867, BW=14.6MiB/s (15.3MB/s)(73.0MiB/5003msec) 00:38:53.405 slat (nsec): min=3994, max=54801, avg=11774.93, stdev=4881.10 00:38:53.405 clat (usec): min=1007, max=8021, avg=4242.44, stdev=621.66 00:38:53.405 lat (usec): min=1020, max=8036, avg=4254.22, stdev=621.77 00:38:53.405 clat percentiles (usec): 00:38:53.405 | 1.00th=[ 2507], 5.00th=[ 3261], 10.00th=[ 3490], 20.00th=[ 3818], 00:38:53.405 | 30.00th=[ 4080], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:38:53.405 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5080], 00:38:53.405 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7635], 99.95th=[ 7767], 00:38:53.405 | 99.99th=[ 8029] 00:38:53.405 bw ( KiB/s): min=14064, max=15952, per=25.99%, avg=14942.00, stdev=701.97, samples=10 00:38:53.405 iops : min= 1758, max= 1994, avg=1867.70, stdev=87.81, samples=10 00:38:53.405 lat (msec) : 2=0.34%, 4=25.92%, 10=73.74% 00:38:53.405 cpu : usr=91.36%, sys=7.98%, ctx=16, majf=0, minf=0 00:38:53.405 IO depths : 1=0.3%, 2=11.9%, 4=59.7%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:53.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:38:53.405 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.405 issued rwts: total=9345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.405 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:53.405 filename0: (groupid=0, jobs=1): err= 0: pid=99153: Sun Jul 21 08:35:02 2024 00:38:53.405 read: IOPS=1757, BW=13.7MiB/s (14.4MB/s)(68.6MiB/5001msec) 00:38:53.405 slat (nsec): min=5584, max=59988, avg=12519.43, stdev=5305.60 00:38:53.405 clat (usec): min=751, max=8504, avg=4508.81, stdev=759.75 00:38:53.405 lat (usec): min=779, max=8518, avg=4521.33, stdev=759.38 00:38:53.405 clat percentiles (usec): 00:38:53.405 | 1.00th=[ 2442], 5.00th=[ 3589], 10.00th=[ 3884], 20.00th=[ 4146], 00:38:53.405 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:38:53.405 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5276], 95.00th=[ 5997], 00:38:53.405 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 8225], 99.95th=[ 8455], 00:38:53.405 | 99.99th=[ 8455] 00:38:53.405 bw ( KiB/s): min=12560, max=14832, per=24.37%, avg=14014.22, stdev=669.01, samples=9 00:38:53.405 iops : min= 1570, max= 1854, avg=1751.78, stdev=83.63, samples=9 00:38:53.405 lat (usec) : 1000=0.05% 00:38:53.405 lat (msec) : 2=0.76%, 4=12.05%, 10=87.14% 00:38:53.405 cpu : usr=92.92%, sys=6.38%, ctx=21, majf=0, minf=9 00:38:53.405 IO depths : 1=0.1%, 2=11.9%, 4=60.1%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:53.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.405 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.405 issued rwts: total=8787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.405 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:53.405 filename1: (groupid=0, jobs=1): err= 0: pid=99154: Sun Jul 21 08:35:02 2024 00:38:53.405 read: IOPS=1768, BW=13.8MiB/s (14.5MB/s)(69.1MiB/5004msec) 00:38:53.405 slat (nsec): min=3829, max=53249, avg=12270.04, stdev=4836.46 
00:38:53.405 clat (usec): min=876, max=8360, avg=4480.52, stdev=759.33 00:38:53.405 lat (usec): min=888, max=8374, avg=4492.79, stdev=759.12 00:38:53.405 clat percentiles (usec): 00:38:53.405 | 1.00th=[ 2802], 5.00th=[ 3556], 10.00th=[ 3818], 20.00th=[ 4113], 00:38:53.405 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4490], 00:38:53.405 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5145], 95.00th=[ 6063], 00:38:53.405 | 99.00th=[ 7373], 99.50th=[ 7635], 99.90th=[ 8029], 99.95th=[ 8160], 00:38:53.405 | 99.99th=[ 8356] 00:38:53.405 bw ( KiB/s): min=13680, max=14832, per=24.61%, avg=14148.80, stdev=413.60, samples=10 00:38:53.405 iops : min= 1710, max= 1854, avg=1768.60, stdev=51.70, samples=10 00:38:53.405 lat (usec) : 1000=0.03% 00:38:53.405 lat (msec) : 2=0.43%, 4=15.57%, 10=83.97% 00:38:53.405 cpu : usr=92.60%, sys=6.72%, ctx=11, majf=0, minf=9 00:38:53.405 IO depths : 1=0.2%, 2=11.4%, 4=60.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:53.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.405 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.405 issued rwts: total=8851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.405 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:53.405 filename1: (groupid=0, jobs=1): err= 0: pid=99155: Sun Jul 21 08:35:02 2024 00:38:53.405 read: IOPS=1796, BW=14.0MiB/s (14.7MB/s)(70.2MiB/5001msec) 00:38:53.405 slat (nsec): min=5396, max=56230, avg=12841.19, stdev=5300.14 00:38:53.405 clat (usec): min=1013, max=8278, avg=4409.54, stdev=692.11 00:38:53.405 lat (usec): min=1025, max=8293, avg=4422.38, stdev=691.93 00:38:53.405 clat percentiles (usec): 00:38:53.405 | 1.00th=[ 2442], 5.00th=[ 3425], 10.00th=[ 3752], 20.00th=[ 4080], 00:38:53.405 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:38:53.405 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5145], 95.00th=[ 5669], 00:38:53.405 | 99.00th=[ 6783], 99.50th=[ 7111], 
99.90th=[ 7898], 99.95th=[ 8029], 00:38:53.405 | 99.99th=[ 8291] 00:38:53.405 bw ( KiB/s): min=14000, max=14864, per=25.00%, avg=14375.11, stdev=316.59, samples=9 00:38:53.405 iops : min= 1750, max= 1858, avg=1796.89, stdev=39.57, samples=9 00:38:53.405 lat (msec) : 2=0.57%, 4=16.69%, 10=82.74% 00:38:53.405 cpu : usr=92.22%, sys=7.10%, ctx=10, majf=0, minf=9 00:38:53.405 IO depths : 1=0.1%, 2=12.6%, 4=59.1%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:53.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.405 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.405 issued rwts: total=8982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.405 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:53.405 00:38:53.405 Run status group 0 (all jobs): 00:38:53.405 READ: bw=56.2MiB/s (58.9MB/s), 13.7MiB/s-14.6MiB/s (14.4MB/s-15.3MB/s), io=281MiB (295MB), run=5001-5004msec 00:38:53.405 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:53.405 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:53.405 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:53.405 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:53.405 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:53.405 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:53.406 08:35:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.406 00:38:53.406 real 0m23.865s 00:38:53.406 user 4m33.535s 00:38:53.406 sys 0m7.055s 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:53.406 08:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:53.406 ************************************ 00:38:53.406 END TEST fio_dif_rand_params 00:38:53.406 ************************************ 00:38:53.406 08:35:02 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:53.406 08:35:02 nvmf_dif -- target/dif.sh@144 -- # run_test 
fio_dif_digest fio_dif_digest 00:38:53.406 08:35:02 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:53.406 08:35:02 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:53.406 08:35:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:53.406 ************************************ 00:38:53.406 START TEST fio_dif_digest 00:38:53.406 ************************************ 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:53.406 08:35:02 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:53.406 bdev_null0 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:53.406 [2024-07-21 08:35:02.916725] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:53.406 { 00:38:53.406 "params": { 00:38:53.406 "name": "Nvme$subsystem", 00:38:53.406 "trtype": "$TEST_TRANSPORT", 00:38:53.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:53.406 "adrfam": "ipv4", 00:38:53.406 "trsvcid": "$NVMF_PORT", 00:38:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:53.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:53.406 "hdgst": ${hdgst:-false}, 00:38:53.406 "ddgst": ${ddgst:-false} 00:38:53.406 }, 00:38:53.406 "method": "bdev_nvme_attach_controller" 00:38:53.406 } 00:38:53.406 EOF 00:38:53.406 )") 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:53.406 "params": { 00:38:53.406 "name": "Nvme0", 00:38:53.406 "trtype": "tcp", 00:38:53.406 "traddr": "10.0.0.2", 00:38:53.406 "adrfam": "ipv4", 00:38:53.406 "trsvcid": "4420", 00:38:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:53.406 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:53.406 "hdgst": true, 00:38:53.406 "ddgst": true 00:38:53.406 }, 00:38:53.406 "method": "bdev_nvme_attach_controller" 00:38:53.406 }' 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:53.406 08:35:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:53.664 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:53.664 ... 
00:38:53.664 fio-3.35 00:38:53.664 Starting 3 threads 00:38:53.664 EAL: No free 2048 kB hugepages reported on node 1 00:39:05.869 00:39:05.869 filename0: (groupid=0, jobs=1): err= 0: pid=99909: Sun Jul 21 08:35:13 2024 00:39:05.869 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(258MiB/10045msec) 00:39:05.869 slat (nsec): min=5186, max=57346, avg=20516.76, stdev=4873.56 00:39:05.869 clat (usec): min=8949, max=55316, avg=14521.45, stdev=2521.33 00:39:05.869 lat (usec): min=8970, max=55331, avg=14541.97, stdev=2521.02 00:39:05.869 clat percentiles (usec): 00:39:05.869 | 1.00th=[11731], 5.00th=[12649], 10.00th=[13042], 20.00th=[13566], 00:39:05.869 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:39:05.869 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:39:05.869 | 99.00th=[17171], 99.50th=[18482], 99.90th=[54789], 99.95th=[54789], 00:39:05.869 | 99.99th=[55313] 00:39:05.869 bw ( KiB/s): min=24064, max=28416, per=33.73%, avg=26432.00, stdev=1026.94, samples=20 00:39:05.869 iops : min= 188, max= 222, avg=206.50, stdev= 8.02, samples=20 00:39:05.869 lat (msec) : 10=0.39%, 20=99.18%, 50=0.15%, 100=0.29% 00:39:05.869 cpu : usr=93.77%, sys=5.65%, ctx=101, majf=0, minf=166 00:39:05.869 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:05.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.869 issued rwts: total=2066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:05.869 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:05.869 filename0: (groupid=0, jobs=1): err= 0: pid=99910: Sun Jul 21 08:35:13 2024 00:39:05.869 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(261MiB/10045msec) 00:39:05.869 slat (nsec): min=6158, max=94797, avg=16602.95, stdev=5236.52 00:39:05.869 clat (usec): min=7912, max=51944, avg=14419.03, stdev=1672.01 00:39:05.869 lat (usec): min=7924, max=51957, avg=14435.63, 
stdev=1671.99 00:39:05.869 clat percentiles (usec): 00:39:05.869 | 1.00th=[10028], 5.00th=[12649], 10.00th=[13042], 20.00th=[13566], 00:39:05.869 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14353], 60.00th=[14615], 00:39:05.869 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:39:05.869 | 99.00th=[17171], 99.50th=[17695], 99.90th=[24773], 99.95th=[49021], 00:39:05.869 | 99.99th=[52167] 00:39:05.869 bw ( KiB/s): min=24832, max=27648, per=34.01%, avg=26649.60, stdev=769.79, samples=20 00:39:05.869 iops : min= 194, max= 216, avg=208.20, stdev= 6.01, samples=20 00:39:05.869 lat (msec) : 10=1.01%, 20=98.75%, 50=0.19%, 100=0.05% 00:39:05.869 cpu : usr=93.32%, sys=6.21%, ctx=32, majf=0, minf=204 00:39:05.869 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:05.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.869 issued rwts: total=2084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:05.869 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:05.869 filename0: (groupid=0, jobs=1): err= 0: pid=99911: Sun Jul 21 08:35:13 2024 00:39:05.869 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(250MiB/10003msec) 00:39:05.869 slat (nsec): min=6420, max=45297, avg=16321.92, stdev=4636.96 00:39:05.869 clat (usec): min=8934, max=56072, avg=14983.80, stdev=1979.63 00:39:05.869 lat (usec): min=8948, max=56086, avg=15000.12, stdev=1979.52 00:39:05.869 clat percentiles (usec): 00:39:05.869 | 1.00th=[11469], 5.00th=[13042], 10.00th=[13566], 20.00th=[14091], 00:39:05.869 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:39:05.869 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16450], 95.00th=[16909], 00:39:05.869 | 99.00th=[17695], 99.50th=[18220], 99.90th=[55313], 99.95th=[55837], 00:39:05.869 | 99.99th=[55837] 00:39:05.869 bw ( KiB/s): min=23086, max=26880, per=32.64%, avg=25576.70, stdev=947.50, 
samples=20 00:39:05.869 iops : min= 180, max= 210, avg=199.80, stdev= 7.45, samples=20 00:39:05.869 lat (msec) : 10=0.25%, 20=99.45%, 50=0.15%, 100=0.15% 00:39:05.869 cpu : usr=93.17%, sys=6.37%, ctx=25, majf=0, minf=106 00:39:05.869 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:05.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.869 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:05.869 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:05.869 00:39:05.869 Run status group 0 (all jobs): 00:39:05.869 READ: bw=76.5MiB/s (80.2MB/s), 25.0MiB/s-25.9MiB/s (26.2MB/s-27.2MB/s), io=769MiB (806MB), run=10003-10045msec 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:05.869 00:39:05.869 real 0m11.197s 00:39:05.869 user 0m29.321s 00:39:05.869 sys 0m2.085s 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:05.869 08:35:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:05.869 ************************************ 00:39:05.869 END TEST fio_dif_digest 00:39:05.869 ************************************ 00:39:05.869 08:35:14 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:05.869 08:35:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:05.869 08:35:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:05.869 rmmod nvme_tcp 00:39:05.869 rmmod nvme_fabrics 00:39:05.869 rmmod nvme_keyring 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 93988 ']' 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 93988 00:39:05.869 08:35:14 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 93988 ']' 00:39:05.869 08:35:14 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 93988 00:39:05.869 08:35:14 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:39:05.869 08:35:14 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:05.869 08:35:14 nvmf_dif -- common/autotest_common.sh@954 
-- # ps --no-headers -o comm= 93988 00:39:05.869 08:35:14 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:05.869 08:35:14 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:05.869 08:35:14 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93988' 00:39:05.869 killing process with pid 93988 00:39:05.869 08:35:14 nvmf_dif -- common/autotest_common.sh@967 -- # kill 93988 00:39:05.869 08:35:14 nvmf_dif -- common/autotest_common.sh@972 -- # wait 93988 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:05.869 08:35:14 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:05.869 Waiting for block devices as requested 00:39:06.129 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:06.129 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:06.387 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:06.387 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:06.387 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:06.387 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:06.645 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:06.645 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:06.645 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:06.645 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:06.903 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:06.903 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:06.903 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:06.903 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:07.162 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:07.162 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:07.162 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:07.162 08:35:16 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:07.162 08:35:16 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:07.162 08:35:16 nvmf_dif -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:07.162 08:35:16 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:07.162 08:35:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:07.162 08:35:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:07.162 08:35:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:09.717 08:35:18 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:09.717 00:39:09.717 real 1m5.953s 00:39:09.717 user 6m29.657s 00:39:09.717 sys 0m18.220s 00:39:09.717 08:35:18 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:09.717 08:35:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:09.717 ************************************ 00:39:09.717 END TEST nvmf_dif 00:39:09.717 ************************************ 00:39:09.717 08:35:18 -- common/autotest_common.sh@1142 -- # return 0 00:39:09.717 08:35:18 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:09.717 08:35:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:09.717 08:35:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:09.717 08:35:18 -- common/autotest_common.sh@10 -- # set +x 00:39:09.717 ************************************ 00:39:09.717 START TEST nvmf_abort_qd_sizes 00:39:09.717 ************************************ 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:09.717 * Looking for test storage... 
00:39:09.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:09.717 08:35:18 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:39:09.717 08:35:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:11.090 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:11.090 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:39:11.090 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:11.090 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:11.090 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:11.090 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:11.091 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:11.091 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:39:11.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:11.091 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:11.091 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:11.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:11.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:39:11.350 00:39:11.350 --- 10.0.0.2 ping statistics --- 00:39:11.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.350 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:11.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:11.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:39:11.350 00:39:11.350 --- 10.0.0.1 ping statistics --- 00:39:11.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.350 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:11.350 08:35:20 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:12.282 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:12.541 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:12.541 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:12.541 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:12.541 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:12.541 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:12.541 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:12.541 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:12.541 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:12.541 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:12.541 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:12.541 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:12.541 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:12.541 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:12.541 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:39:12.541 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:13.474 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=104733 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 104733 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 104733 ']' 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:13.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:13.474 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:13.731 [2024-07-21 08:35:23.124230] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:39:13.731 [2024-07-21 08:35:23.124302] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:13.731 EAL: No free 2048 kB hugepages reported on node 1 00:39:13.731 [2024-07-21 08:35:23.192795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:13.731 [2024-07-21 08:35:23.285349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:13.731 [2024-07-21 08:35:23.285411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:13.731 [2024-07-21 08:35:23.285438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:13.731 [2024-07-21 08:35:23.285451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:13.731 [2024-07-21 08:35:23.285464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:13.731 [2024-07-21 08:35:23.285542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.731 [2024-07-21 08:35:23.285597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:13.731 [2024-07-21 08:35:23.285719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:13.731 [2024-07-21 08:35:23.285723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:13.989 08:35:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:13.989 ************************************ 00:39:13.989 START TEST spdk_target_abort 00:39:13.989 ************************************ 00:39:13.989 08:35:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:39:13.989 08:35:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:13.989 08:35:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:39:13.989 08:35:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.989 08:35:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:17.269 spdk_targetn1 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:17.269 [2024-07-21 08:35:26.287555] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:17.269 [2024-07-21 08:35:26.319849] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:17.269 08:35:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:17.269 EAL: No free 2048 kB hugepages reported on node 1 00:39:20.552 Initializing NVMe Controllers 00:39:20.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:20.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:20.552 Initialization complete. Launching workers. 
00:39:20.552 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12812, failed: 0 00:39:20.552 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1217, failed to submit 11595 00:39:20.552 success 723, unsuccess 494, failed 0 00:39:20.552 08:35:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:20.552 08:35:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:20.552 EAL: No free 2048 kB hugepages reported on node 1 00:39:23.873 Initializing NVMe Controllers 00:39:23.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:23.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:23.873 Initialization complete. Launching workers. 
00:39:23.873 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8670, failed: 0 00:39:23.873 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1236, failed to submit 7434 00:39:23.873 success 320, unsuccess 916, failed 0 00:39:23.873 08:35:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:23.873 08:35:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:23.873 EAL: No free 2048 kB hugepages reported on node 1 00:39:27.148 Initializing NVMe Controllers 00:39:27.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:27.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:27.148 Initialization complete. Launching workers. 
00:39:27.148 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31621, failed: 0 00:39:27.148 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2666, failed to submit 28955 00:39:27.148 success 486, unsuccess 2180, failed 0 00:39:27.148 08:35:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:27.148 08:35:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.148 08:35:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:27.148 08:35:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.148 08:35:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:27.148 08:35:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.148 08:35:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 104733 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 104733 ']' 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 104733 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104733 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104733' 00:39:28.083 killing process with pid 104733 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 104733 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 104733 00:39:28.083 00:39:28.083 real 0m14.235s 00:39:28.083 user 0m53.965s 00:39:28.083 sys 0m2.544s 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:28.083 08:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:28.083 ************************************ 00:39:28.083 END TEST spdk_target_abort 00:39:28.083 ************************************ 00:39:28.083 08:35:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:39:28.083 08:35:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:28.083 08:35:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:28.083 08:35:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:28.083 08:35:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:28.340 ************************************ 00:39:28.340 START TEST kernel_target_abort 00:39:28.340 ************************************ 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 
00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:28.340 08:35:37 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:28.340 08:35:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:29.270 Waiting for block devices as requested 00:39:29.270 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:29.530 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:29.530 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:29.789 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:29.789 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:29.789 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:29.789 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:30.047 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:30.047 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:30.047 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:30.047 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:30.305 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:30.305 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:30.305 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:30.305 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:30.562 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:30.562 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:30.820 No valid GPT data, bailing 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:30.820 08:35:40 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:39:30.820 00:39:30.820 Discovery Log Number of Records 2, Generation counter 2 00:39:30.820 =====Discovery Log Entry 0====== 00:39:30.820 trtype: tcp 00:39:30.820 adrfam: ipv4 00:39:30.820 subtype: current discovery subsystem 00:39:30.820 treq: not specified, sq flow control disable supported 00:39:30.820 portid: 1 00:39:30.820 trsvcid: 4420 00:39:30.820 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:30.820 traddr: 10.0.0.1 00:39:30.820 eflags: none 00:39:30.820 sectype: none 00:39:30.820 =====Discovery Log Entry 1====== 00:39:30.820 trtype: tcp 00:39:30.820 adrfam: ipv4 00:39:30.820 subtype: nvme subsystem 00:39:30.820 treq: not specified, sq flow control disable supported 00:39:30.820 portid: 1 00:39:30.820 trsvcid: 4420 00:39:30.820 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:30.820 traddr: 10.0.0.1 00:39:30.820 eflags: none 00:39:30.820 
sectype: none 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:30.820 08:35:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:30.820 EAL: No free 2048 kB hugepages reported on node 1 00:39:34.119 Initializing NVMe Controllers 00:39:34.119 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:34.119 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:34.119 Initialization complete. Launching workers. 
00:39:34.119 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44387, failed: 0 00:39:34.119 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 44387, failed to submit 0 00:39:34.119 success 0, unsuccess 44387, failed 0 00:39:34.119 08:35:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:34.119 08:35:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:34.119 EAL: No free 2048 kB hugepages reported on node 1 00:39:37.400 Initializing NVMe Controllers 00:39:37.400 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:37.400 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:37.400 Initialization complete. Launching workers. 
00:39:37.400 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79136, failed: 0 00:39:37.400 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19954, failed to submit 59182 00:39:37.400 success 0, unsuccess 19954, failed 0 00:39:37.400 08:35:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:37.400 08:35:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:37.400 EAL: No free 2048 kB hugepages reported on node 1 00:39:40.684 Initializing NVMe Controllers 00:39:40.684 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:40.684 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:40.684 Initialization complete. Launching workers. 
00:39:40.684 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76904, failed: 0 00:39:40.684 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19214, failed to submit 57690 00:39:40.684 success 0, unsuccess 19214, failed 0 00:39:40.684 08:35:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:40.684 08:35:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:40.684 08:35:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:39:40.684 08:35:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:40.684 08:35:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:40.684 08:35:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:40.684 08:35:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:40.684 08:35:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:39:40.684 08:35:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:39:40.684 08:35:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:41.252 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:41.252 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:41.511 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:41.511 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:41.511 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:41.511 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:41.511 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:41.511 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:41.511 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:41.511 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:41.511 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:41.511 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:41.511 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:41.511 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:41.511 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:41.511 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:42.445 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:39:42.445 00:39:42.445 real 0m14.314s 00:39:42.445 user 0m6.039s 00:39:42.445 sys 0m3.318s 00:39:42.445 08:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:42.445 08:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:42.445 ************************************ 00:39:42.445 END TEST kernel_target_abort 00:39:42.445 ************************************ 00:39:42.445 08:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:39:42.445 08:35:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:42.445 08:35:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:42.445 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:42.445 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:39:42.446 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:42.446 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:39:42.446 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:42.446 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:42.446 rmmod nvme_tcp 00:39:42.706 rmmod nvme_fabrics 
00:39:42.706 rmmod nvme_keyring 00:39:42.706 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:42.706 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:39:42.706 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:39:42.706 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 104733 ']' 00:39:42.706 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 104733 00:39:42.706 08:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 104733 ']' 00:39:42.706 08:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 104733 00:39:42.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (104733) - No such process 00:39:42.706 08:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 104733 is not found' 00:39:42.706 Process with pid 104733 is not found 00:39:42.706 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:42.706 08:35:52 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:43.647 Waiting for block devices as requested 00:39:43.647 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:43.904 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:43.904 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:44.161 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:44.161 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:44.161 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:44.161 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:44.419 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:44.419 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:44.419 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:44.419 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:44.678 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:44.678 0000:80:04.4 (8086 0e24): vfio-pci 
-> ioatdma 00:39:44.678 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:44.678 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:44.937 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:44.937 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:44.937 08:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:44.937 08:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:44.937 08:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:44.937 08:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:44.937 08:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.937 08:35:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:44.937 08:35:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:47.462 08:35:56 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:47.462 00:39:47.462 real 0m37.704s 00:39:47.462 user 1m2.093s 00:39:47.462 sys 0m9.007s 00:39:47.462 08:35:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:47.462 08:35:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:47.462 ************************************ 00:39:47.462 END TEST nvmf_abort_qd_sizes 00:39:47.462 ************************************ 00:39:47.462 08:35:56 -- common/autotest_common.sh@1142 -- # return 0 00:39:47.462 08:35:56 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:47.462 08:35:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:47.462 08:35:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:47.462 08:35:56 -- common/autotest_common.sh@10 -- # set +x 00:39:47.462 ************************************ 00:39:47.462 START TEST keyring_file 00:39:47.462 
************************************ 00:39:47.462 08:35:56 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:47.462 * Looking for test storage... 00:39:47.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:47.462 08:35:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:47.462 08:35:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:47.462 
08:35:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:47.462 08:35:56 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:47.462 08:35:56 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:47.462 08:35:56 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:47.462 08:35:56 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.462 08:35:56 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.462 08:35:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.462 08:35:56 
keyring_file -- paths/export.sh@5 -- # export PATH 00:39:47.462 08:35:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@47 -- # : 0 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:47.462 08:35:56 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:47.462 08:35:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:47.462 08:35:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:47.462 08:35:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:47.462 08:35:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:47.462 08:35:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:47.462 08:35:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:47.462 08:35:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2j50p6JOUK 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2j50p6JOUK 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2j50p6JOUK 00:39:47.463 08:35:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.2j50p6JOUK 00:39:47.463 08:35:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JtleE1P1Ow 00:39:47.463 08:35:56 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:47.463 08:35:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JtleE1P1Ow 00:39:47.463 08:35:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JtleE1P1Ow 00:39:47.463 08:35:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.JtleE1P1Ow 00:39:47.463 08:35:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=110559 00:39:47.463 08:35:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:47.463 08:35:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 110559 00:39:47.463 08:35:56 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 110559 ']' 00:39:47.463 08:35:56 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:47.463 08:35:56 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:47.463 08:35:56 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:47.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:47.463 08:35:56 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:47.463 08:35:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:47.463 [2024-07-21 08:35:56.816847] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:39:47.463 [2024-07-21 08:35:56.816952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110559 ] 00:39:47.463 EAL: No free 2048 kB hugepages reported on node 1 00:39:47.463 [2024-07-21 08:35:56.874029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:47.463 [2024-07-21 08:35:56.962432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:47.721 08:35:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:47.721 [2024-07-21 08:35:57.225029] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:47.721 null0 00:39:47.721 [2024-07-21 08:35:57.257096] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:47.721 [2024-07-21 08:35:57.257564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:47.721 [2024-07-21 08:35:57.265096] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.721 08:35:57 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:47.721 [2024-07-21 08:35:57.273088] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:47.721 request: 00:39:47.721 { 00:39:47.721 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:47.721 "secure_channel": false, 00:39:47.721 "listen_address": { 00:39:47.721 "trtype": "tcp", 00:39:47.721 "traddr": "127.0.0.1", 00:39:47.721 "trsvcid": "4420" 00:39:47.721 }, 00:39:47.721 "method": "nvmf_subsystem_add_listener", 00:39:47.721 "req_id": 1 00:39:47.721 } 00:39:47.721 Got JSON-RPC error response 00:39:47.721 response: 00:39:47.721 { 00:39:47.721 "code": -32602, 00:39:47.721 "message": "Invalid parameters" 00:39:47.721 } 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:47.721 08:35:57 keyring_file -- keyring/file.sh@46 -- # bperfpid=110568 00:39:47.721 08:35:57 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:47.721 08:35:57 keyring_file -- keyring/file.sh@48 -- # waitforlisten 110568 /var/tmp/bperf.sock 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 110568 ']' 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:47.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:47.721 08:35:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:47.721 [2024-07-21 08:35:57.320789] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:39:47.721 [2024-07-21 08:35:57.320866] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110568 ] 00:39:47.721 EAL: No free 2048 kB hugepages reported on node 1 00:39:47.978 [2024-07-21 08:35:57.378744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:47.978 [2024-07-21 08:35:57.464126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:47.978 08:35:57 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:47.978 08:35:57 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:47.978 08:35:57 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2j50p6JOUK 00:39:47.978 08:35:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2j50p6JOUK 00:39:48.236 08:35:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JtleE1P1Ow 00:39:48.236 08:35:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JtleE1P1Ow 00:39:48.493 08:35:58 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:39:48.493 08:35:58 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:39:48.493 08:35:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.493 08:35:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.493 08:35:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:48.751 08:35:58 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.2j50p6JOUK == 
\/\t\m\p\/\t\m\p\.\2\j\5\0\p\6\J\O\U\K ]] 00:39:48.751 08:35:58 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:39:48.751 08:35:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:48.751 08:35:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.751 08:35:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.751 08:35:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:49.008 08:35:58 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.JtleE1P1Ow == \/\t\m\p\/\t\m\p\.\J\t\l\e\E\1\P\1\O\w ]] 00:39:49.008 08:35:58 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:39:49.009 08:35:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:49.009 08:35:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:49.009 08:35:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.009 08:35:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.009 08:35:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:49.267 08:35:58 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:39:49.267 08:35:58 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:39:49.267 08:35:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:49.267 08:35:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:49.267 08:35:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.267 08:35:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:49.267 08:35:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.524 08:35:59 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:39:49.524 08:35:59 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:49.524 08:35:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:49.782 [2024-07-21 08:35:59.320886] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:49.782 nvme0n1 00:39:50.040 08:35:59 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:39:50.040 08:35:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:50.040 08:35:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:50.040 08:35:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:50.040 08:35:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.040 08:35:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:50.040 08:35:59 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:39:50.040 08:35:59 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:39:50.040 08:35:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:50.040 08:35:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:50.040 08:35:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:50.040 08:35:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.040 08:35:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:50.298 08:35:59 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:39:50.298 08:35:59 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:50.557 Running I/O for 1 seconds... 00:39:51.489 00:39:51.489 Latency(us) 00:39:51.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.489 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:51.489 nvme0n1 : 1.01 8007.10 31.28 0.00 0.00 15909.79 8398.32 31651.46 00:39:51.489 =================================================================================================================== 00:39:51.489 Total : 8007.10 31.28 0.00 0.00 15909.79 8398.32 31651.46 00:39:51.489 0 00:39:51.489 08:36:01 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:51.489 08:36:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:51.745 08:36:01 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:39:51.745 08:36:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:51.745 08:36:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:51.745 08:36:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:51.745 08:36:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:51.745 08:36:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:52.002 08:36:01 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:39:52.002 08:36:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:39:52.002 08:36:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:52.002 08:36:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:52.002 08:36:01 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:52.002 08:36:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:52.002 08:36:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:52.258 08:36:01 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:52.258 08:36:01 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:52.258 08:36:01 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:52.258 08:36:01 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:52.258 08:36:01 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:52.258 08:36:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.258 08:36:01 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:52.258 08:36:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:52.258 08:36:01 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:52.258 08:36:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:52.515 [2024-07-21 08:36:02.087893] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:52.515 [2024-07-21 08:36:02.088727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f68710 (107): Transport endpoint is not connected 00:39:52.515 [2024-07-21 08:36:02.089720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f68710 (9): Bad file descriptor 00:39:52.515 [2024-07-21 08:36:02.090718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:52.515 [2024-07-21 08:36:02.090736] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:52.515 [2024-07-21 08:36:02.090749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:52.515 request: 00:39:52.515 { 00:39:52.515 "name": "nvme0", 00:39:52.515 "trtype": "tcp", 00:39:52.515 "traddr": "127.0.0.1", 00:39:52.515 "adrfam": "ipv4", 00:39:52.515 "trsvcid": "4420", 00:39:52.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:52.515 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:52.515 "prchk_reftag": false, 00:39:52.515 "prchk_guard": false, 00:39:52.515 "hdgst": false, 00:39:52.515 "ddgst": false, 00:39:52.515 "psk": "key1", 00:39:52.515 "method": "bdev_nvme_attach_controller", 00:39:52.515 "req_id": 1 00:39:52.515 } 00:39:52.515 Got JSON-RPC error response 00:39:52.515 response: 00:39:52.515 { 00:39:52.515 "code": -5, 00:39:52.515 "message": "Input/output error" 00:39:52.515 } 00:39:52.515 08:36:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:52.515 08:36:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:52.515 08:36:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:52.515 08:36:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:52.515 08:36:02 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:39:52.515 
08:36:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:52.515 08:36:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:52.515 08:36:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:52.515 08:36:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:52.515 08:36:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:52.772 08:36:02 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:39:52.772 08:36:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:39:52.772 08:36:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:52.772 08:36:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:52.772 08:36:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:52.772 08:36:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:52.772 08:36:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:53.030 08:36:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:53.030 08:36:02 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:39:53.030 08:36:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:53.287 08:36:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:39:53.287 08:36:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:53.553 08:36:03 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:39:53.553 08:36:03 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:53.553 08:36:03 keyring_file -- keyring/file.sh@77 -- # jq length 00:39:53.815 08:36:03 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:39:53.815 08:36:03 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.2j50p6JOUK 00:39:53.815 08:36:03 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.2j50p6JOUK 00:39:53.815 08:36:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:53.815 08:36:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.2j50p6JOUK 00:39:53.815 08:36:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:53.815 08:36:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:53.815 08:36:03 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:53.815 08:36:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:53.815 08:36:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2j50p6JOUK 00:39:53.815 08:36:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2j50p6JOUK 00:39:54.071 [2024-07-21 08:36:03.628830] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2j50p6JOUK': 0100660 00:39:54.071 [2024-07-21 08:36:03.628865] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:54.071 request: 00:39:54.071 { 00:39:54.071 "name": "key0", 00:39:54.071 "path": "/tmp/tmp.2j50p6JOUK", 00:39:54.071 "method": "keyring_file_add_key", 00:39:54.071 "req_id": 1 00:39:54.071 } 00:39:54.071 Got JSON-RPC error response 00:39:54.071 response: 00:39:54.071 { 00:39:54.071 "code": -1, 
00:39:54.071 "message": "Operation not permitted" 00:39:54.071 } 00:39:54.071 08:36:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:54.071 08:36:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:54.071 08:36:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:54.071 08:36:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:54.071 08:36:03 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.2j50p6JOUK 00:39:54.071 08:36:03 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2j50p6JOUK 00:39:54.071 08:36:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2j50p6JOUK 00:39:54.328 08:36:03 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.2j50p6JOUK 00:39:54.328 08:36:03 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:39:54.328 08:36:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:54.328 08:36:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:54.328 08:36:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:54.328 08:36:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:54.328 08:36:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.585 08:36:04 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:39:54.585 08:36:04 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.585 08:36:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:54.585 08:36:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.585 08:36:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:54.585 08:36:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:54.585 08:36:04 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:54.585 08:36:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:54.585 08:36:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.585 08:36:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.843 [2024-07-21 08:36:04.402988] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.2j50p6JOUK': No such file or directory 00:39:54.843 [2024-07-21 08:36:04.403022] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:54.843 [2024-07-21 08:36:04.403052] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:54.843 [2024-07-21 08:36:04.403065] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:54.843 [2024-07-21 08:36:04.403079] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:54.843 request: 00:39:54.843 { 00:39:54.843 "name": "nvme0", 00:39:54.843 "trtype": "tcp", 00:39:54.843 "traddr": "127.0.0.1", 00:39:54.843 "adrfam": "ipv4", 00:39:54.843 "trsvcid": "4420", 00:39:54.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:54.843 
"hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:54.843 "prchk_reftag": false, 00:39:54.843 "prchk_guard": false, 00:39:54.843 "hdgst": false, 00:39:54.843 "ddgst": false, 00:39:54.843 "psk": "key0", 00:39:54.843 "method": "bdev_nvme_attach_controller", 00:39:54.843 "req_id": 1 00:39:54.843 } 00:39:54.843 Got JSON-RPC error response 00:39:54.843 response: 00:39:54.843 { 00:39:54.843 "code": -19, 00:39:54.843 "message": "No such device" 00:39:54.843 } 00:39:54.843 08:36:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:54.843 08:36:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:54.843 08:36:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:54.843 08:36:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:54.843 08:36:04 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:39:54.843 08:36:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:55.100 08:36:04 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:55.100 08:36:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:55.101 08:36:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:55.101 08:36:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:55.101 08:36:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:55.101 08:36:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:55.101 08:36:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Iq3xkdJcQM 00:39:55.101 08:36:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:55.101 08:36:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:55.101 08:36:04 keyring_file -- nvmf/common.sh@702 -- # local prefix 
key digest 00:39:55.101 08:36:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:55.101 08:36:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:55.101 08:36:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:55.101 08:36:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:55.359 08:36:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Iq3xkdJcQM 00:39:55.359 08:36:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Iq3xkdJcQM 00:39:55.359 08:36:04 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Iq3xkdJcQM 00:39:55.359 08:36:04 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Iq3xkdJcQM 00:39:55.359 08:36:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Iq3xkdJcQM 00:39:55.616 08:36:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:55.616 08:36:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:55.873 nvme0n1 00:39:55.873 08:36:05 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:39:55.873 08:36:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:55.873 08:36:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:55.873 08:36:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:55.873 08:36:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:55.873 08:36:05 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:56.130 08:36:05 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:39:56.130 08:36:05 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:39:56.130 08:36:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:56.389 08:36:05 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:39:56.389 08:36:05 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:39:56.389 08:36:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:56.389 08:36:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:56.389 08:36:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:56.681 08:36:06 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:39:56.681 08:36:06 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:39:56.681 08:36:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:56.681 08:36:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:56.681 08:36:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:56.681 08:36:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:56.681 08:36:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:56.938 08:36:06 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:39:56.938 08:36:06 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:56.938 08:36:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:57.195 08:36:06 
keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:39:57.195 08:36:06 keyring_file -- keyring/file.sh@104 -- # jq length 00:39:57.195 08:36:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:57.453 08:36:06 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:39:57.453 08:36:06 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Iq3xkdJcQM 00:39:57.453 08:36:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Iq3xkdJcQM 00:39:57.710 08:36:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JtleE1P1Ow 00:39:57.710 08:36:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JtleE1P1Ow 00:39:57.967 08:36:07 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:57.967 08:36:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:58.224 nvme0n1 00:39:58.224 08:36:07 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:39:58.224 08:36:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:58.481 08:36:08 keyring_file -- keyring/file.sh@112 -- # config='{ 00:39:58.481 "subsystems": [ 00:39:58.481 { 00:39:58.481 "subsystem": "keyring", 00:39:58.481 "config": [ 00:39:58.481 { 
00:39:58.481 "method": "keyring_file_add_key", 00:39:58.481 "params": { 00:39:58.481 "name": "key0", 00:39:58.481 "path": "/tmp/tmp.Iq3xkdJcQM" 00:39:58.481 } 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "method": "keyring_file_add_key", 00:39:58.481 "params": { 00:39:58.481 "name": "key1", 00:39:58.481 "path": "/tmp/tmp.JtleE1P1Ow" 00:39:58.481 } 00:39:58.481 } 00:39:58.481 ] 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "subsystem": "iobuf", 00:39:58.481 "config": [ 00:39:58.481 { 00:39:58.481 "method": "iobuf_set_options", 00:39:58.481 "params": { 00:39:58.481 "small_pool_count": 8192, 00:39:58.481 "large_pool_count": 1024, 00:39:58.481 "small_bufsize": 8192, 00:39:58.481 "large_bufsize": 135168 00:39:58.481 } 00:39:58.481 } 00:39:58.481 ] 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "subsystem": "sock", 00:39:58.481 "config": [ 00:39:58.481 { 00:39:58.481 "method": "sock_set_default_impl", 00:39:58.481 "params": { 00:39:58.481 "impl_name": "posix" 00:39:58.481 } 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "method": "sock_impl_set_options", 00:39:58.481 "params": { 00:39:58.481 "impl_name": "ssl", 00:39:58.481 "recv_buf_size": 4096, 00:39:58.481 "send_buf_size": 4096, 00:39:58.481 "enable_recv_pipe": true, 00:39:58.481 "enable_quickack": false, 00:39:58.481 "enable_placement_id": 0, 00:39:58.481 "enable_zerocopy_send_server": true, 00:39:58.481 "enable_zerocopy_send_client": false, 00:39:58.481 "zerocopy_threshold": 0, 00:39:58.481 "tls_version": 0, 00:39:58.481 "enable_ktls": false 00:39:58.481 } 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "method": "sock_impl_set_options", 00:39:58.481 "params": { 00:39:58.481 "impl_name": "posix", 00:39:58.481 "recv_buf_size": 2097152, 00:39:58.481 "send_buf_size": 2097152, 00:39:58.481 "enable_recv_pipe": true, 00:39:58.481 "enable_quickack": false, 00:39:58.481 "enable_placement_id": 0, 00:39:58.481 "enable_zerocopy_send_server": true, 00:39:58.481 "enable_zerocopy_send_client": false, 00:39:58.481 "zerocopy_threshold": 0, 
00:39:58.481 "tls_version": 0, 00:39:58.481 "enable_ktls": false 00:39:58.481 } 00:39:58.481 } 00:39:58.481 ] 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "subsystem": "vmd", 00:39:58.481 "config": [] 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "subsystem": "accel", 00:39:58.481 "config": [ 00:39:58.481 { 00:39:58.481 "method": "accel_set_options", 00:39:58.481 "params": { 00:39:58.481 "small_cache_size": 128, 00:39:58.481 "large_cache_size": 16, 00:39:58.481 "task_count": 2048, 00:39:58.481 "sequence_count": 2048, 00:39:58.481 "buf_count": 2048 00:39:58.481 } 00:39:58.481 } 00:39:58.481 ] 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "subsystem": "bdev", 00:39:58.481 "config": [ 00:39:58.481 { 00:39:58.481 "method": "bdev_set_options", 00:39:58.481 "params": { 00:39:58.481 "bdev_io_pool_size": 65535, 00:39:58.481 "bdev_io_cache_size": 256, 00:39:58.481 "bdev_auto_examine": true, 00:39:58.481 "iobuf_small_cache_size": 128, 00:39:58.481 "iobuf_large_cache_size": 16 00:39:58.481 } 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "method": "bdev_raid_set_options", 00:39:58.481 "params": { 00:39:58.481 "process_window_size_kb": 1024, 00:39:58.481 "process_max_bandwidth_mb_sec": 0 00:39:58.481 } 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "method": "bdev_iscsi_set_options", 00:39:58.481 "params": { 00:39:58.481 "timeout_sec": 30 00:39:58.481 } 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "method": "bdev_nvme_set_options", 00:39:58.481 "params": { 00:39:58.481 "action_on_timeout": "none", 00:39:58.481 "timeout_us": 0, 00:39:58.481 "timeout_admin_us": 0, 00:39:58.481 "keep_alive_timeout_ms": 10000, 00:39:58.481 "arbitration_burst": 0, 00:39:58.481 "low_priority_weight": 0, 00:39:58.481 "medium_priority_weight": 0, 00:39:58.481 "high_priority_weight": 0, 00:39:58.481 "nvme_adminq_poll_period_us": 10000, 00:39:58.481 "nvme_ioq_poll_period_us": 0, 00:39:58.481 "io_queue_requests": 512, 00:39:58.481 "delay_cmd_submit": true, 00:39:58.481 "transport_retry_count": 4, 00:39:58.481 
"bdev_retry_count": 3, 00:39:58.481 "transport_ack_timeout": 0, 00:39:58.481 "ctrlr_loss_timeout_sec": 0, 00:39:58.481 "reconnect_delay_sec": 0, 00:39:58.481 "fast_io_fail_timeout_sec": 0, 00:39:58.481 "disable_auto_failback": false, 00:39:58.481 "generate_uuids": false, 00:39:58.481 "transport_tos": 0, 00:39:58.481 "nvme_error_stat": false, 00:39:58.481 "rdma_srq_size": 0, 00:39:58.481 "io_path_stat": false, 00:39:58.481 "allow_accel_sequence": false, 00:39:58.481 "rdma_max_cq_size": 0, 00:39:58.481 "rdma_cm_event_timeout_ms": 0, 00:39:58.481 "dhchap_digests": [ 00:39:58.481 "sha256", 00:39:58.481 "sha384", 00:39:58.481 "sha512" 00:39:58.481 ], 00:39:58.481 "dhchap_dhgroups": [ 00:39:58.481 "null", 00:39:58.481 "ffdhe2048", 00:39:58.481 "ffdhe3072", 00:39:58.481 "ffdhe4096", 00:39:58.481 "ffdhe6144", 00:39:58.481 "ffdhe8192" 00:39:58.481 ] 00:39:58.481 } 00:39:58.481 }, 00:39:58.481 { 00:39:58.481 "method": "bdev_nvme_attach_controller", 00:39:58.481 "params": { 00:39:58.481 "name": "nvme0", 00:39:58.481 "trtype": "TCP", 00:39:58.481 "adrfam": "IPv4", 00:39:58.481 "traddr": "127.0.0.1", 00:39:58.481 "trsvcid": "4420", 00:39:58.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:58.481 "prchk_reftag": false, 00:39:58.481 "prchk_guard": false, 00:39:58.481 "ctrlr_loss_timeout_sec": 0, 00:39:58.481 "reconnect_delay_sec": 0, 00:39:58.481 "fast_io_fail_timeout_sec": 0, 00:39:58.481 "psk": "key0", 00:39:58.482 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:58.482 "hdgst": false, 00:39:58.482 "ddgst": false 00:39:58.482 } 00:39:58.482 }, 00:39:58.482 { 00:39:58.482 "method": "bdev_nvme_set_hotplug", 00:39:58.482 "params": { 00:39:58.482 "period_us": 100000, 00:39:58.482 "enable": false 00:39:58.482 } 00:39:58.482 }, 00:39:58.482 { 00:39:58.482 "method": "bdev_wait_for_examine" 00:39:58.482 } 00:39:58.482 ] 00:39:58.482 }, 00:39:58.482 { 00:39:58.482 "subsystem": "nbd", 00:39:58.482 "config": [] 00:39:58.482 } 00:39:58.482 ] 00:39:58.482 }' 00:39:58.482 08:36:08 keyring_file 
-- keyring/file.sh@114 -- # killprocess 110568 00:39:58.482 08:36:08 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 110568 ']' 00:39:58.482 08:36:08 keyring_file -- common/autotest_common.sh@952 -- # kill -0 110568 00:39:58.482 08:36:08 keyring_file -- common/autotest_common.sh@953 -- # uname 00:39:58.482 08:36:08 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:58.482 08:36:08 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110568 00:39:58.482 08:36:08 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:58.482 08:36:08 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:58.482 08:36:08 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110568' 00:39:58.482 killing process with pid 110568 00:39:58.482 08:36:08 keyring_file -- common/autotest_common.sh@967 -- # kill 110568 00:39:58.482 Received shutdown signal, test time was about 1.000000 seconds 00:39:58.482 00:39:58.482 Latency(us) 00:39:58.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:58.482 =================================================================================================================== 00:39:58.482 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:58.482 08:36:08 keyring_file -- common/autotest_common.sh@972 -- # wait 110568 00:39:58.739 08:36:08 keyring_file -- keyring/file.sh@117 -- # bperfpid=112498 00:39:58.739 08:36:08 keyring_file -- keyring/file.sh@119 -- # waitforlisten 112498 /var/tmp/bperf.sock 00:39:58.739 08:36:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 112498 ']' 00:39:58.739 08:36:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:58.739 08:36:08 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c 
/dev/fd/63 00:39:58.739 08:36:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:58.739 08:36:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:58.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:58.739 08:36:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:58.739 08:36:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:58.739 08:36:08 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:39:58.739 "subsystems": [ 00:39:58.739 { 00:39:58.739 "subsystem": "keyring", 00:39:58.739 "config": [ 00:39:58.739 { 00:39:58.739 "method": "keyring_file_add_key", 00:39:58.739 "params": { 00:39:58.739 "name": "key0", 00:39:58.739 "path": "/tmp/tmp.Iq3xkdJcQM" 00:39:58.739 } 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "method": "keyring_file_add_key", 00:39:58.739 "params": { 00:39:58.739 "name": "key1", 00:39:58.739 "path": "/tmp/tmp.JtleE1P1Ow" 00:39:58.739 } 00:39:58.739 } 00:39:58.739 ] 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "subsystem": "iobuf", 00:39:58.739 "config": [ 00:39:58.739 { 00:39:58.739 "method": "iobuf_set_options", 00:39:58.739 "params": { 00:39:58.739 "small_pool_count": 8192, 00:39:58.739 "large_pool_count": 1024, 00:39:58.739 "small_bufsize": 8192, 00:39:58.739 "large_bufsize": 135168 00:39:58.739 } 00:39:58.739 } 00:39:58.739 ] 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "subsystem": "sock", 00:39:58.739 "config": [ 00:39:58.739 { 00:39:58.739 "method": "sock_set_default_impl", 00:39:58.739 "params": { 00:39:58.739 "impl_name": "posix" 00:39:58.739 } 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "method": "sock_impl_set_options", 00:39:58.739 "params": { 00:39:58.739 "impl_name": "ssl", 00:39:58.739 "recv_buf_size": 4096, 00:39:58.739 "send_buf_size": 4096, 00:39:58.739 "enable_recv_pipe": true, 00:39:58.739 "enable_quickack": false, 
00:39:58.739 "enable_placement_id": 0, 00:39:58.739 "enable_zerocopy_send_server": true, 00:39:58.739 "enable_zerocopy_send_client": false, 00:39:58.739 "zerocopy_threshold": 0, 00:39:58.739 "tls_version": 0, 00:39:58.739 "enable_ktls": false 00:39:58.739 } 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "method": "sock_impl_set_options", 00:39:58.739 "params": { 00:39:58.739 "impl_name": "posix", 00:39:58.739 "recv_buf_size": 2097152, 00:39:58.739 "send_buf_size": 2097152, 00:39:58.739 "enable_recv_pipe": true, 00:39:58.739 "enable_quickack": false, 00:39:58.739 "enable_placement_id": 0, 00:39:58.739 "enable_zerocopy_send_server": true, 00:39:58.739 "enable_zerocopy_send_client": false, 00:39:58.739 "zerocopy_threshold": 0, 00:39:58.739 "tls_version": 0, 00:39:58.739 "enable_ktls": false 00:39:58.739 } 00:39:58.739 } 00:39:58.739 ] 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "subsystem": "vmd", 00:39:58.739 "config": [] 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "subsystem": "accel", 00:39:58.739 "config": [ 00:39:58.739 { 00:39:58.739 "method": "accel_set_options", 00:39:58.739 "params": { 00:39:58.739 "small_cache_size": 128, 00:39:58.739 "large_cache_size": 16, 00:39:58.739 "task_count": 2048, 00:39:58.739 "sequence_count": 2048, 00:39:58.739 "buf_count": 2048 00:39:58.739 } 00:39:58.739 } 00:39:58.739 ] 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "subsystem": "bdev", 00:39:58.739 "config": [ 00:39:58.739 { 00:39:58.739 "method": "bdev_set_options", 00:39:58.739 "params": { 00:39:58.739 "bdev_io_pool_size": 65535, 00:39:58.739 "bdev_io_cache_size": 256, 00:39:58.739 "bdev_auto_examine": true, 00:39:58.739 "iobuf_small_cache_size": 128, 00:39:58.739 "iobuf_large_cache_size": 16 00:39:58.739 } 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "method": "bdev_raid_set_options", 00:39:58.739 "params": { 00:39:58.739 "process_window_size_kb": 1024, 00:39:58.739 "process_max_bandwidth_mb_sec": 0 00:39:58.739 } 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "method": 
"bdev_iscsi_set_options", 00:39:58.739 "params": { 00:39:58.739 "timeout_sec": 30 00:39:58.739 } 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "method": "bdev_nvme_set_options", 00:39:58.739 "params": { 00:39:58.739 "action_on_timeout": "none", 00:39:58.739 "timeout_us": 0, 00:39:58.739 "timeout_admin_us": 0, 00:39:58.739 "keep_alive_timeout_ms": 10000, 00:39:58.739 "arbitration_burst": 0, 00:39:58.739 "low_priority_weight": 0, 00:39:58.739 "medium_priority_weight": 0, 00:39:58.739 "high_priority_weight": 0, 00:39:58.739 "nvme_adminq_poll_period_us": 10000, 00:39:58.739 "nvme_ioq_poll_period_us": 0, 00:39:58.739 "io_queue_requests": 512, 00:39:58.739 "delay_cmd_submit": true, 00:39:58.739 "transport_retry_count": 4, 00:39:58.739 "bdev_retry_count": 3, 00:39:58.739 "transport_ack_timeout": 0, 00:39:58.739 "ctrlr_loss_timeout_sec": 0, 00:39:58.739 "reconnect_delay_sec": 0, 00:39:58.739 "fast_io_fail_timeout_sec": 0, 00:39:58.739 "disable_auto_failback": false, 00:39:58.739 "generate_uuids": false, 00:39:58.739 "transport_tos": 0, 00:39:58.739 "nvme_error_stat": false, 00:39:58.739 "rdma_srq_size": 0, 00:39:58.739 "io_path_stat": false, 00:39:58.739 "allow_accel_sequence": false, 00:39:58.739 "rdma_max_cq_size": 0, 00:39:58.739 "rdma_cm_event_timeout_ms": 0, 00:39:58.739 "dhchap_digests": [ 00:39:58.739 "sha256", 00:39:58.739 "sha384", 00:39:58.739 "sha512" 00:39:58.739 ], 00:39:58.739 "dhchap_dhgroups": [ 00:39:58.739 "null", 00:39:58.739 "ffdhe2048", 00:39:58.739 "ffdhe3072", 00:39:58.739 "ffdhe4096", 00:39:58.739 "ffdhe6144", 00:39:58.739 "ffdhe8192" 00:39:58.739 ] 00:39:58.739 } 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "method": "bdev_nvme_attach_controller", 00:39:58.739 "params": { 00:39:58.739 "name": "nvme0", 00:39:58.739 "trtype": "TCP", 00:39:58.739 "adrfam": "IPv4", 00:39:58.739 "traddr": "127.0.0.1", 00:39:58.739 "trsvcid": "4420", 00:39:58.739 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:58.739 "prchk_reftag": false, 00:39:58.739 "prchk_guard": false, 
00:39:58.739 "ctrlr_loss_timeout_sec": 0, 00:39:58.739 "reconnect_delay_sec": 0, 00:39:58.739 "fast_io_fail_timeout_sec": 0, 00:39:58.739 "psk": "key0", 00:39:58.739 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:58.739 "hdgst": false, 00:39:58.739 "ddgst": false 00:39:58.739 } 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "method": "bdev_nvme_set_hotplug", 00:39:58.739 "params": { 00:39:58.739 "period_us": 100000, 00:39:58.739 "enable": false 00:39:58.739 } 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "method": "bdev_wait_for_examine" 00:39:58.739 } 00:39:58.739 ] 00:39:58.739 }, 00:39:58.739 { 00:39:58.739 "subsystem": "nbd", 00:39:58.739 "config": [] 00:39:58.739 } 00:39:58.739 ] 00:39:58.739 }' 00:39:58.739 [2024-07-21 08:36:08.304643] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 00:39:58.739 [2024-07-21 08:36:08.304740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112498 ] 00:39:58.739 EAL: No free 2048 kB hugepages reported on node 1 00:39:58.739 [2024-07-21 08:36:08.365353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:58.996 [2024-07-21 08:36:08.457205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:59.253 [2024-07-21 08:36:08.646478] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:59.816 08:36:09 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:59.816 08:36:09 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:59.816 08:36:09 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:39:59.816 08:36:09 keyring_file -- keyring/file.sh@120 -- # jq length 00:39:59.816 08:36:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:40:00.072 08:36:09 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:40:00.072 08:36:09 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:40:00.072 08:36:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:00.072 08:36:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:00.072 08:36:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:00.072 08:36:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:00.072 08:36:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:00.330 08:36:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:00.330 08:36:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:40:00.330 08:36:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:00.330 08:36:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:00.330 08:36:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:00.330 08:36:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:00.330 08:36:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:00.587 08:36:10 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:40:00.587 08:36:10 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:40:00.587 08:36:10 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:40:00.587 08:36:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:00.846 08:36:10 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:40:00.846 08:36:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:00.846 08:36:10 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.Iq3xkdJcQM /tmp/tmp.JtleE1P1Ow 00:40:00.846 08:36:10 keyring_file -- keyring/file.sh@20 -- # killprocess 112498 00:40:00.846 08:36:10 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 112498 ']' 00:40:00.846 08:36:10 keyring_file -- common/autotest_common.sh@952 -- # kill -0 112498 00:40:00.846 08:36:10 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:00.846 08:36:10 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:00.846 08:36:10 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112498 00:40:00.846 08:36:10 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:00.846 08:36:10 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:00.846 08:36:10 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112498' 00:40:00.846 killing process with pid 112498 00:40:00.846 08:36:10 keyring_file -- common/autotest_common.sh@967 -- # kill 112498 00:40:00.846 Received shutdown signal, test time was about 1.000000 seconds 00:40:00.846 00:40:00.846 Latency(us) 00:40:00.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:00.846 =================================================================================================================== 00:40:00.846 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:00.846 08:36:10 keyring_file -- common/autotest_common.sh@972 -- # wait 112498 00:40:01.105 08:36:10 keyring_file -- keyring/file.sh@21 -- # killprocess 110559 00:40:01.105 08:36:10 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 110559 ']' 00:40:01.105 08:36:10 keyring_file -- common/autotest_common.sh@952 -- # kill -0 110559 00:40:01.105 08:36:10 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:01.105 08:36:10 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:01.105 08:36:10 
keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110559 00:40:01.105 08:36:10 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:01.105 08:36:10 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:01.105 08:36:10 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110559' 00:40:01.105 killing process with pid 110559 00:40:01.105 08:36:10 keyring_file -- common/autotest_common.sh@967 -- # kill 110559 00:40:01.105 [2024-07-21 08:36:10.565507] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:40:01.105 08:36:10 keyring_file -- common/autotest_common.sh@972 -- # wait 110559 00:40:01.365 00:40:01.365 real 0m14.391s 00:40:01.365 user 0m35.963s 00:40:01.365 sys 0m3.343s 00:40:01.365 08:36:10 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:01.365 08:36:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:01.365 ************************************ 00:40:01.365 END TEST keyring_file 00:40:01.365 ************************************ 00:40:01.624 08:36:11 -- common/autotest_common.sh@1142 -- # return 0 00:40:01.624 08:36:11 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:40:01.624 08:36:11 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:01.624 08:36:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:01.624 08:36:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:01.624 08:36:11 -- common/autotest_common.sh@10 -- # set +x 00:40:01.624 ************************************ 00:40:01.624 START TEST keyring_linux 00:40:01.624 ************************************ 00:40:01.624 08:36:11 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:01.624 * Looking for test 
storage... 00:40:01.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:01.624 08:36:11 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:01.624 08:36:11 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:01.624 08:36:11 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:01.624 08:36:11 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:01.624 08:36:11 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:01.624 08:36:11 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.624 08:36:11 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.624 08:36:11 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.624 08:36:11 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:01.624 08:36:11 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:01.624 08:36:11 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:01.624 08:36:11 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:01.624 08:36:11 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:01.624 08:36:11 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:01.624 08:36:11 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:01.624 08:36:11 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:01.624 08:36:11 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:01.624 /tmp/:spdk-test:key0 00:40:01.624 08:36:11 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:01.624 08:36:11 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:01.624 08:36:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:01.624 /tmp/:spdk-test:key1 00:40:01.624 08:36:11 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=113011 00:40:01.624 08:36:11 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:01.624 08:36:11 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 113011 00:40:01.624 08:36:11 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 113011 ']' 00:40:01.624 08:36:11 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:01.624 08:36:11 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:01.624 08:36:11 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:01.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:01.624 08:36:11 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:01.624 08:36:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:01.624 [2024-07-21 08:36:11.241878] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:40:01.624 [2024-07-21 08:36:11.241973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113011 ] 00:40:01.882 EAL: No free 2048 kB hugepages reported on node 1 00:40:01.882 [2024-07-21 08:36:11.305216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:01.882 [2024-07-21 08:36:11.402467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:02.139 08:36:11 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:02.139 08:36:11 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:02.139 08:36:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:02.139 08:36:11 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:02.139 08:36:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:02.139 [2024-07-21 08:36:11.645335] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:02.139 null0 00:40:02.139 [2024-07-21 08:36:11.677389] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:02.139 [2024-07-21 08:36:11.677858] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:02.139 08:36:11 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:02.139 08:36:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:02.139 526923891 00:40:02.139 08:36:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:02.139 484116076 00:40:02.139 08:36:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=113135 00:40:02.140 08:36:11 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:02.140 08:36:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 113135 /var/tmp/bperf.sock 00:40:02.140 08:36:11 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 113135 ']' 00:40:02.140 08:36:11 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:02.140 08:36:11 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:02.140 08:36:11 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:02.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:02.140 08:36:11 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:02.140 08:36:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:02.140 [2024-07-21 08:36:11.744946] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 22.11.4 initialization... 
00:40:02.140 [2024-07-21 08:36:11.745023] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113135 ] 00:40:02.398 EAL: No free 2048 kB hugepages reported on node 1 00:40:02.398 [2024-07-21 08:36:11.810373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.398 [2024-07-21 08:36:11.900932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:02.398 08:36:11 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:02.398 08:36:11 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:02.398 08:36:11 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:02.398 08:36:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:02.656 08:36:12 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:02.656 08:36:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:02.914 08:36:12 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:02.914 08:36:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:03.172 [2024-07-21 08:36:12.743491] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:03.431 
nvme0n1 00:40:03.431 08:36:12 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:03.431 08:36:12 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:03.431 08:36:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:03.431 08:36:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:03.431 08:36:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:03.431 08:36:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:03.689 08:36:13 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:03.689 08:36:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:03.689 08:36:13 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:03.689 08:36:13 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:03.689 08:36:13 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:03.689 08:36:13 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:03.689 08:36:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:03.689 08:36:13 keyring_linux -- keyring/linux.sh@25 -- # sn=526923891 00:40:03.689 08:36:13 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:03.689 08:36:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:03.946 08:36:13 keyring_linux -- keyring/linux.sh@26 -- # [[ 526923891 == \5\2\6\9\2\3\8\9\1 ]] 00:40:03.946 08:36:13 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 526923891 00:40:03.946 08:36:13 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:03.946 08:36:13 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:03.946 Running I/O for 1 seconds... 00:40:04.881 00:40:04.881 Latency(us) 00:40:04.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:04.881 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:04.881 nvme0n1 : 1.01 8553.74 33.41 0.00 0.00 14853.15 3956.43 19612.25 00:40:04.881 =================================================================================================================== 00:40:04.881 Total : 8553.74 33.41 0.00 0.00 14853.15 3956.43 19612.25 00:40:04.881 0 00:40:04.881 08:36:14 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:04.881 08:36:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:05.139 08:36:14 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:05.139 08:36:14 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:05.139 08:36:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:05.139 08:36:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:05.139 08:36:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.139 08:36:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:05.396 08:36:14 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:05.396 08:36:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:05.396 08:36:14 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:05.396 08:36:14 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:05.396 08:36:14 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:40:05.396 08:36:14 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:05.396 08:36:14 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:05.396 08:36:14 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:05.396 08:36:14 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:05.396 08:36:14 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:05.397 08:36:14 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:05.397 08:36:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:05.655 [2024-07-21 08:36:15.193330] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:05.655 [2024-07-21 08:36:15.193936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99c020 (107): Transport endpoint is not connected 00:40:05.655 [2024-07-21 08:36:15.194920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x99c020 (9): Bad file descriptor 00:40:05.655 [2024-07-21 08:36:15.195925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:05.655 [2024-07-21 08:36:15.195955] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:05.655 [2024-07-21 08:36:15.195970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:05.655 request: 00:40:05.655 { 00:40:05.655 "name": "nvme0", 00:40:05.655 "trtype": "tcp", 00:40:05.655 "traddr": "127.0.0.1", 00:40:05.655 "adrfam": "ipv4", 00:40:05.655 "trsvcid": "4420", 00:40:05.655 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:05.655 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:05.655 "prchk_reftag": false, 00:40:05.655 "prchk_guard": false, 00:40:05.655 "hdgst": false, 00:40:05.655 "ddgst": false, 00:40:05.655 "psk": ":spdk-test:key1", 00:40:05.655 "method": "bdev_nvme_attach_controller", 00:40:05.655 "req_id": 1 00:40:05.655 } 00:40:05.655 Got JSON-RPC error response 00:40:05.655 response: 00:40:05.655 { 00:40:05.655 "code": -5, 00:40:05.655 "message": "Input/output error" 00:40:05.655 } 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@33 -- # sn=526923891 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 526923891 00:40:05.655 1 links removed 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@33 -- # sn=484116076 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 484116076 00:40:05.655 1 links removed 00:40:05.655 08:36:15 keyring_linux -- keyring/linux.sh@41 -- # killprocess 113135 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 113135 ']' 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 113135 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113135 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113135' 00:40:05.655 killing process with pid 113135 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@967 -- # kill 113135 00:40:05.655 Received shutdown signal, test time was about 1.000000 seconds 00:40:05.655 00:40:05.655 Latency(us) 00:40:05.655 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:05.655 =================================================================================================================== 00:40:05.655 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:05.655 08:36:15 keyring_linux -- common/autotest_common.sh@972 -- # wait 113135 00:40:05.915 08:36:15 keyring_linux -- keyring/linux.sh@42 -- # killprocess 113011 00:40:05.915 08:36:15 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 113011 ']' 00:40:05.915 08:36:15 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 113011 00:40:05.915 08:36:15 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:05.915 08:36:15 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:05.915 08:36:15 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113011 00:40:05.915 08:36:15 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:05.915 08:36:15 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:05.915 08:36:15 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113011' 00:40:05.915 killing process with pid 113011 00:40:05.915 08:36:15 keyring_linux -- common/autotest_common.sh@967 -- # kill 113011 00:40:05.915 08:36:15 keyring_linux -- common/autotest_common.sh@972 -- # wait 113011 00:40:06.481 00:40:06.481 real 0m4.845s 00:40:06.481 user 0m9.204s 00:40:06.481 sys 0m1.645s 00:40:06.481 08:36:15 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:06.481 08:36:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:06.481 ************************************ 00:40:06.481 END TEST keyring_linux 00:40:06.481 ************************************ 00:40:06.481 08:36:15 -- common/autotest_common.sh@1142 -- # return 0 00:40:06.481 08:36:15 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:40:06.481 08:36:15 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 
00:40:06.481 08:36:15 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:40:06.481 08:36:15 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:40:06.481 08:36:15 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:40:06.481 08:36:15 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:40:06.481 08:36:15 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:40:06.481 08:36:15 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:40:06.481 08:36:15 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:40:06.481 08:36:15 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:40:06.481 08:36:15 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:40:06.481 08:36:15 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:40:06.481 08:36:15 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:40:06.481 08:36:15 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:40:06.481 08:36:15 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:40:06.481 08:36:15 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:40:06.481 08:36:15 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:40:06.481 08:36:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:06.481 08:36:15 -- common/autotest_common.sh@10 -- # set +x 00:40:06.481 08:36:15 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:40:06.481 08:36:15 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:40:06.481 08:36:15 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:40:06.481 08:36:15 -- common/autotest_common.sh@10 -- # set +x 00:40:08.386 INFO: APP EXITING 00:40:08.386 INFO: killing all VMs 00:40:08.386 INFO: killing vhost app 00:40:08.386 INFO: EXIT DONE 00:40:09.320 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:40:09.320 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:40:09.320 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:40:09.320 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:40:09.320 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:40:09.320 0000:00:04.3 (8086 0e23): Already using the 
ioatdma driver 00:40:09.320 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:40:09.320 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:40:09.320 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:40:09.320 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:40:09.320 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:40:09.320 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:40:09.320 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:40:09.320 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:40:09.320 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:40:09.320 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:40:09.320 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:40:10.726 Cleaning 00:40:10.726 Removing: /var/run/dpdk/spdk0/config 00:40:10.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:10.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:10.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:10.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:10.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:10.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:10.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:10.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:10.726 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:10.726 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:10.726 Removing: /var/run/dpdk/spdk1/config 00:40:10.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:10.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:10.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:10.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:10.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:10.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 
00:40:10.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:10.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:10.726 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:10.726 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:10.726 Removing: /var/run/dpdk/spdk1/mp_socket 00:40:10.726 Removing: /var/run/dpdk/spdk2/config 00:40:10.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:10.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:10.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:10.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:10.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:10.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:10.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:10.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:10.726 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:10.726 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:10.726 Removing: /var/run/dpdk/spdk3/config 00:40:10.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:10.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:10.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:10.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:10.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:10.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:10.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:10.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:10.726 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:10.726 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:10.726 Removing: /var/run/dpdk/spdk4/config 00:40:10.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:10.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:10.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:10.726 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:10.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:10.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:10.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:10.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:10.726 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:10.726 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:10.726 Removing: /dev/shm/bdev_svc_trace.1 00:40:10.726 Removing: /dev/shm/nvmf_trace.0 00:40:10.726 Removing: /dev/shm/spdk_tgt_trace.pid3986493 00:40:10.726 Removing: /var/run/dpdk/spdk0 00:40:10.726 Removing: /var/run/dpdk/spdk1 00:40:10.726 Removing: /var/run/dpdk/spdk2 00:40:10.726 Removing: /var/run/dpdk/spdk3 00:40:10.726 Removing: /var/run/dpdk/spdk4 00:40:10.726 Removing: /var/run/dpdk/spdk_pid10432 00:40:10.726 Removing: /var/run/dpdk/spdk_pid105102 00:40:10.726 Removing: /var/run/dpdk/spdk_pid105494 00:40:10.726 Removing: /var/run/dpdk/spdk_pid105886 00:40:10.726 Removing: /var/run/dpdk/spdk_pid107440 00:40:10.726 Removing: /var/run/dpdk/spdk_pid107719 00:40:10.726 Removing: /var/run/dpdk/spdk_pid108114 00:40:10.726 Removing: /var/run/dpdk/spdk_pid110559 00:40:10.726 Removing: /var/run/dpdk/spdk_pid110568 00:40:10.988 Removing: /var/run/dpdk/spdk_pid112498 00:40:10.988 Removing: /var/run/dpdk/spdk_pid113011 00:40:10.988 Removing: /var/run/dpdk/spdk_pid113135 00:40:10.988 Removing: /var/run/dpdk/spdk_pid11372 00:40:10.988 Removing: /var/run/dpdk/spdk_pid12569 00:40:10.988 Removing: /var/run/dpdk/spdk_pid15506 00:40:10.988 Removing: /var/run/dpdk/spdk_pid17838 00:40:10.988 Removing: /var/run/dpdk/spdk_pid21936 00:40:10.988 Removing: /var/run/dpdk/spdk_pid22003 00:40:10.988 Removing: /var/run/dpdk/spdk_pid24779 00:40:10.988 Removing: /var/run/dpdk/spdk_pid24957 00:40:10.988 Removing: /var/run/dpdk/spdk_pid25087 00:40:10.988 Removing: /var/run/dpdk/spdk_pid25353 00:40:10.988 Removing: /var/run/dpdk/spdk_pid25368 
00:40:10.988 Removing: /var/run/dpdk/spdk_pid26433 00:40:10.988 Removing: /var/run/dpdk/spdk_pid27632 00:40:10.988 Removing: /var/run/dpdk/spdk_pid28893 00:40:10.988 Removing: /var/run/dpdk/spdk_pid30098 00:40:10.988 Removing: /var/run/dpdk/spdk_pid31280 00:40:10.988 Removing: /var/run/dpdk/spdk_pid32456 00:40:10.988 Removing: /var/run/dpdk/spdk_pid36134 00:40:10.988 Removing: /var/run/dpdk/spdk_pid36589 00:40:10.988 Removing: /var/run/dpdk/spdk_pid37865 00:40:10.988 Removing: /var/run/dpdk/spdk_pid38604 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3907 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3984944 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3985681 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3986493 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3986932 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3987623 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3987761 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3988473 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3988490 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3988731 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3989931 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3990838 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3991125 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3991329 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3991530 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3991726 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3991885 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3992038 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3992220 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3992665 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3995018 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3995180 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3995342 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3995351 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3995780 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3995785 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3996217 00:40:10.988 Removing: /var/run/dpdk/spdk_pid3996220 
00:40:10.988 Removing: /var/run/dpdk/spdk_pid3996511
00:40:10.988 Removing: /var/run/dpdk/spdk_pid3996526
00:40:10.988 Removing: /var/run/dpdk/spdk_pid3996688
00:40:10.988 Removing: /var/run/dpdk/spdk_pid3996758
00:40:10.988 Removing: /var/run/dpdk/spdk_pid3997187
00:40:10.988 Removing: /var/run/dpdk/spdk_pid3997342
00:40:10.989 Removing: /var/run/dpdk/spdk_pid3997533
00:40:10.989 Removing: /var/run/dpdk/spdk_pid3997733
00:40:10.989 Removing: /var/run/dpdk/spdk_pid3997786
00:40:10.989 Removing: /var/run/dpdk/spdk_pid3998000
00:40:10.989 Removing: /var/run/dpdk/spdk_pid3998185
00:40:10.989 Removing: /var/run/dpdk/spdk_pid3998338
00:40:10.989 Removing: /var/run/dpdk/spdk_pid3998614
00:40:10.989 Removing: /var/run/dpdk/spdk_pid3998771
00:40:10.989 Removing: /var/run/dpdk/spdk_pid3999341
00:40:10.989 Removing: /var/run/dpdk/spdk_pid3999634
00:40:10.989 Removing: /var/run/dpdk/spdk_pid3999859
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4000021
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4000175
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4000449
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4000607
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4000761
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4000942
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4001188
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4001353
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4001509
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4001783
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4001944
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4002104
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4002261
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4002451
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4002655
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4004771
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4058393
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4061016
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4067858
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4071143
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4073371
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4073787
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4077733
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4081446
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4081453
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4082099
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4082711
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4083296
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4083691
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4083702
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4083961
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4084046
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4084096
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4084645
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4085289
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4085948
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4086347
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4086351
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4086540
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4087436
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4088318
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4094173
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4094329
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4096955
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4100533
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4102693
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4108946
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4114130
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4115335
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4115999
00:40:10.989 Removing: /var/run/dpdk/spdk_pid4126696
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4128894
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4154028
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4156801
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4157984
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4159290
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4159314
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4159444
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4159579
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4159895
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4161209
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4161873
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4162238
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4163845
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4164161
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4164708
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4167230
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4170616
00:40:11.248 Removing: /var/run/dpdk/spdk_pid4174031
00:40:11.248 Removing: /var/run/dpdk/spdk_pid42196
00:40:11.248 Removing: /var/run/dpdk/spdk_pid44214
00:40:11.248 Removing: /var/run/dpdk/spdk_pid48072
00:40:11.248 Removing: /var/run/dpdk/spdk_pid51500
00:40:11.248 Removing: /var/run/dpdk/spdk_pid57714
00:40:11.248 Removing: /var/run/dpdk/spdk_pid62048
00:40:11.248 Removing: /var/run/dpdk/spdk_pid62050
00:40:11.248 Removing: /var/run/dpdk/spdk_pid6667
00:40:11.248 Removing: /var/run/dpdk/spdk_pid74245
00:40:11.248 Removing: /var/run/dpdk/spdk_pid74651
00:40:11.248 Removing: /var/run/dpdk/spdk_pid75062
00:40:11.248 Removing: /var/run/dpdk/spdk_pid75582
00:40:11.248 Removing: /var/run/dpdk/spdk_pid76150
00:40:11.248 Removing: /var/run/dpdk/spdk_pid76566
00:40:11.248 Removing: /var/run/dpdk/spdk_pid76980
00:40:11.248 Removing: /var/run/dpdk/spdk_pid77381
00:40:11.248 Removing: /var/run/dpdk/spdk_pid80002
00:40:11.248 Removing: /var/run/dpdk/spdk_pid80148
00:40:11.248 Removing: /var/run/dpdk/spdk_pid84439
00:40:11.248 Removing: /var/run/dpdk/spdk_pid84608
00:40:11.248 Removing: /var/run/dpdk/spdk_pid86214
00:40:11.248 Removing: /var/run/dpdk/spdk_pid91127
00:40:11.248 Removing: /var/run/dpdk/spdk_pid91132
00:40:11.248 Removing: /var/run/dpdk/spdk_pid94034
00:40:11.248 Removing: /var/run/dpdk/spdk_pid95431
00:40:11.248 Removing: /var/run/dpdk/spdk_pid96829
00:40:11.248 Removing: /var/run/dpdk/spdk_pid97574
00:40:11.248 Removing: /var/run/dpdk/spdk_pid98974
00:40:11.248 Removing: /var/run/dpdk/spdk_pid99845
00:40:11.248 Clean
00:40:11.248 08:36:20 -- common/autotest_common.sh@1451 -- # return 0
00:40:11.248 08:36:20 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:40:11.248 08:36:20 -- common/autotest_common.sh@728 -- # xtrace_disable
00:40:11.248 08:36:20 -- common/autotest_common.sh@10 -- # set +x
00:40:11.248 08:36:20 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:40:11.248 08:36:20 -- common/autotest_common.sh@728 -- # xtrace_disable
00:40:11.248 08:36:20 -- common/autotest_common.sh@10 -- # set +x
00:40:11.248 08:36:20 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:11.248 08:36:20 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:40:11.248 08:36:20 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:40:11.248 08:36:20 -- spdk/autotest.sh@391 -- # hash lcov
00:40:11.248 08:36:20 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:40:11.248 08:36:20 -- spdk/autotest.sh@393 -- # hostname
00:40:11.249 08:36:20 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:40:11.507 geninfo: WARNING: invalid characters removed from testname!
00:40:43.560 08:36:48 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:43.560 08:36:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:46.084 08:36:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:49.360 08:36:58 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:51.882 08:37:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:55.156 08:37:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:57.687 08:37:07 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:40:57.945 08:37:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:40:57.945 08:37:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:40:57.945 08:37:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:40:57.945 08:37:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:40:57.945 08:37:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:57.945 08:37:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:57.945 08:37:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:57.945 08:37:07 -- paths/export.sh@5 -- $ export PATH
00:40:57.945 08:37:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:57.945 08:37:07 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:40:57.945 08:37:07 -- common/autobuild_common.sh@447 -- $ date +%s
00:40:57.945 08:37:07 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721543827.XXXXXX
00:40:57.945 08:37:07 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721543827.WqoMmR
00:40:57.945 08:37:07 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:40:57.945 08:37:07 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']'
00:40:57.945 08:37:07 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:40:57.945 08:37:07 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:40:57.945 08:37:07 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:40:57.945 08:37:07 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:40:57.945 08:37:07 -- common/autobuild_common.sh@463 -- $ get_config_params
00:40:57.945 08:37:07 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:40:57.945 08:37:07 -- common/autotest_common.sh@10 -- $ set +x
00:40:57.945 08:37:07 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:40:57.945 08:37:07 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:40:57.945 08:37:07 -- pm/common@17 -- $ local monitor
00:40:57.945 08:37:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:57.945 08:37:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:57.945 08:37:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:57.945 08:37:07 -- pm/common@21 -- $ date +%s
00:40:57.945 08:37:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:57.945 08:37:07 -- pm/common@21 -- $ date +%s
00:40:57.945 08:37:07 -- pm/common@25 -- $ sleep 1
00:40:57.945 08:37:07 -- pm/common@21 -- $ date +%s
00:40:57.945 08:37:07 -- pm/common@21 -- $ date +%s
00:40:57.945 08:37:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721543827
00:40:57.945 08:37:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721543827
00:40:57.945 08:37:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721543827
00:40:57.945 08:37:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721543827
00:40:57.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721543827_collect-vmstat.pm.log
00:40:57.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721543827_collect-cpu-load.pm.log
00:40:57.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721543827_collect-cpu-temp.pm.log
00:40:57.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721543827_collect-bmc-pm.bmc.pm.log
00:40:58.885 08:37:08 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:40:58.885 08:37:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:40:58.885 08:37:08 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:58.885 08:37:08 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:40:58.885 08:37:08 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:40:58.885 08:37:08 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:40:58.885 08:37:08 -- spdk/autopackage.sh@19 -- $ timing_finish
00:40:58.885 08:37:08 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:40:58.885 08:37:08 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:40:58.885 08:37:08 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:58.885 08:37:08 -- spdk/autopackage.sh@20 -- $ exit 0
00:40:58.885 08:37:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:40:58.885 08:37:08 -- pm/common@29 -- $ signal_monitor_resources TERM
00:40:58.885 08:37:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:40:58.885 08:37:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:58.885 08:37:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:40:58.885 08:37:08 -- pm/common@44 -- $ pid=124386
00:40:58.885 08:37:08 -- pm/common@50 -- $ kill -TERM 124386
00:40:58.885 08:37:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:58.885 08:37:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:40:58.885 08:37:08 -- pm/common@44 -- $ pid=124388
00:40:58.885 08:37:08 -- pm/common@50 -- $ kill -TERM 124388
00:40:58.885 08:37:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:58.885 08:37:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:40:58.885 08:37:08 -- pm/common@44 -- $ pid=124389
00:40:58.885 08:37:08 -- pm/common@50 -- $ kill -TERM 124389
00:40:58.885 08:37:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:58.885 08:37:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:40:58.885 08:37:08 -- pm/common@44 -- $ pid=124418
00:40:58.885 08:37:08 -- pm/common@50 -- $ sudo -E kill -TERM 124418
00:40:58.885 + [[ -n 3880918 ]]
00:40:58.885 + sudo kill 3880918
00:40:58.897 [Pipeline] }
00:40:58.922 [Pipeline] // stage
00:40:58.928 [Pipeline] }
00:40:58.952 [Pipeline] // timeout
00:40:58.958 [Pipeline] }
00:40:58.978 [Pipeline] // catchError
00:40:58.984 [Pipeline] }
00:40:59.007 [Pipeline] // wrap
00:40:59.016 [Pipeline] }
00:40:59.033 [Pipeline] // catchError
00:40:59.044 [Pipeline] stage
00:40:59.047 [Pipeline] { (Epilogue)
00:40:59.060 [Pipeline] catchError
00:40:59.062 [Pipeline] {
00:40:59.077 [Pipeline] echo
00:40:59.079 Cleanup processes
00:40:59.085 [Pipeline] sh
00:40:59.371 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:59.371 124521 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:40:59.371 124650 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:59.392 [Pipeline] sh
00:40:59.682 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:59.682 ++ grep -v 'sudo pgrep'
00:40:59.682 ++ awk '{print $1}'
00:40:59.682 + sudo kill -9 124521
00:40:59.697 [Pipeline] sh
00:40:59.981 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:41:09.961 [Pipeline] sh
00:41:10.248 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:41:10.248 Artifacts sizes are good
00:41:10.267 [Pipeline] archiveArtifacts
00:41:10.277 Archiving artifacts
00:41:10.526 [Pipeline] sh
00:41:10.814 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:41:10.831 [Pipeline] cleanWs
00:41:10.841 [WS-CLEANUP] Deleting project workspace...
00:41:10.841 [WS-CLEANUP] Deferred wipeout is used...
00:41:10.848 [WS-CLEANUP] done
00:41:10.850 [Pipeline] }
00:41:10.872 [Pipeline] // catchError
00:41:10.888 [Pipeline] sh
00:41:11.169 + logger -p user.info -t JENKINS-CI
00:41:11.179 [Pipeline] }
00:41:11.196 [Pipeline] // stage
00:41:11.202 [Pipeline] }
00:41:11.222 [Pipeline] // node
00:41:11.226 [Pipeline] End of Pipeline
00:41:11.265 Finished: SUCCESS